diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chief Architect Premier X12 22.5.2.56 Patched keygen How to Activate the Full Features of the Professional 3D Building Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chief Architect Premier X12 22.5.2.56 Patched keygen How to Activate the Full Features of the Professional 3D Building Software.md deleted file mode 100644 index a86ad523b8f188d2da8f1b51cc556d5a442e6bc7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chief Architect Premier X12 22.5.2.56 Patched keygen How to Activate the Full Features of the Professional 3D Building Software.md +++ /dev/null @@ -1,139 +0,0 @@ - -

Chief Architect Premier X12 22.5.2.56 Patched Keygen: A Comprehensive Review

-

If you are looking for a powerful and easy-to-use 3D architecture software for residential and commercial design, you might want to check out Chief Architect Premier X12. This software has automated construction tools that make home design, remodeling, interior design, kitchens and bathrooms, etc. easier. As you draw the walls and place smart architectural objects like doors and windows, the program creates a 3D model, generates a bill of materials, and with the use of powerful construction tools, helps to produce construction documents like blueprints plan, detailed sections, and elevations.

-

In this article, we will review Chief Architect Premier X12 22.5.2.56 Patched Keygen, which is a cracked version of the software that allows you to use it without paying for a license. We will cover the features and benefits of Chief Architect Premier X12, how to install and activate it with the patched keygen, and the pros and cons of using it.

-

Chief Architect Premier X12 22.5.2.56 Patched keygen


DOWNLOAD >>> https://byltly.com/2uKxjH



-

What is Chief Architect Premier X12?

-

Chief Architect Premier X12 is the latest version of Chief Architect software, which was released in February 2020. It is a professional 3D architecture software that can handle all aspects of building design, from conceptual design to construction documents.

-

Chief Architect Premier X12 has many new features and enhancements that make it more efficient and user-friendly. Some of these features include:

- -

Features and benefits of Chief Architect Premier X12

-

Chief Architect Premier X12 has many features and benefits that make it a versatile and powerful 3D architecture software. Here are some of them:

-

Design and build tools

-

Chief Architect Premier X12 has automatic and manual build tools that let you create a variety of roof styles, ladders, trusses, cut BOMs (bill of materials), sizing (dimensioning), sections (cross-sections), elevations (side views), etc. You can also use smart framing tools to create floor systems (joists), wall systems (studs), ceiling systems (rafters), etc. You can also edit these elements individually or in groups to customize their properties.

-

Interior, kitchen and bathroom design

-

Chief Architect Premier X12 uses smart design objects (such as cabinets, appliances, doors, windows, countertops (worktops), floors (flooring), etc.) to quickly and easily create various styles, shapes (forms), sizes (dimensions), etc. You can also use smart labels (tags) to annotate these objects with information such as manufacturer (brand), model (type), price (cost), etc. You can also use smart dimensions (measures) to show the distances between objects or walls.

-

How to download Chief Architect Premier X12 22.5.2.56 with patch
-Chief Architect Premier X12 22.5.2.56 cracked version free download
-Chief Architect Premier X12 22.5.2.56 full version with keygen activation
-Best software for home design: Chief Architect Premier X12 22.5.2.56
-Chief Architect Premier X12 22.5.2.56 patch download link
-Chief Architect Premier X12 22.5.2.56 keygen generator online
-Chief Architect Premier X12 22.5.2.56 review and features
-Chief Architect Premier X12 22.5.2.56 system requirements and compatibility
-Chief Architect Premier X12 22.5.2.56 tutorial and tips
-Chief Architect Premier X12 22.5.2.56 license key and serial number
-Chief Architect Premier X12 22.5.2.56 update and bug fixes
-Chief Architect Premier X12 22.5.2.56 vs other home design software
-Chief Architect Premier X12 22.5.2.56 discount and coupon code
-Chief Architect Premier X12 22.5.2.56 trial version and limitations
-Chief Architect Premier X12 22.5.2.56 alternatives and competitors
-How to install Chief Architect Premier X12 22.5.2.56 with patch and keygen
-Chief Architect Premier X12 22.5.2.56 user manual and guide
-Chief Architect Premier X12 22.5.2.56 support and customer service
-How to uninstall Chief Architect Premier X12 22.5.2.56 completely
-Chief Architect Premier X12 22.5.2.56 testimonials and feedback
-How to use Chief Architect Premier X12 22.5.2.56 for interior design
-How to use Chief Architect Premier X12 22.5.2.56 for exterior design
-How to use Chief Architect Premier X12 22.5.2.56 for landscaping design
-How to use Chief Architect Premier X12 22.5.2.56 for kitchen design
-How to use Chief Architect Premier X12 22.5.2

-

3D modeling and design tools

-

With Chief Architect Premier X12, you can design in any view for seamless (smooth), simultaneous editing between 2D and 3D. You can switch between different views such as plan view (top view), elevation view (side view), perspective view (angle view), orthographic view (straight view), etc. You can also use the camera tool to create custom views such as dollhouse view (open view), glass house view (transparent view), watercolor view (artistic view), etc. You can also use the walkthrough tool to navigate through your model in 3D.

-

CAD tools for productivity and precision

-

Chief Architect Premier X12 has a powerful CAD software engine that includes tools for lines, polylines (connected lines), splines (curved lines), arcs (circular lines), solids (3D shapes), etc. to produce objects. You can also use these tools to draw custom shapes or symbols that can be saved as CAD blocks or library items for future use. You can also import files in DWG, DXF or PDF format from other CAD programs or online sources.

-

Construction blueprint set generation

-

All views of your project such as blueprints plan (floor plan), framing plan (structure plan), sections plan (cross-section plan), details plan (close-up plan) , elevations plan (side views plan) have a user-defined scale and link to a specific drawing that updates as design changes change. You can also use layout sheets to arrange these views on a page with title blocks, borders , text , dimensions , etc. You can also print these sheets or export them as PDF files for sharing or printing.

-

How to install and activate Chief Architect Premier X12 with the patched keygen

-

If you want to use Chief Architect Premier X12 without paying for a license , you can download the patched keygen version from this link. However , be aware that this is an illegal and risky way of using the software , as it may contain viruses , malware , or spyware that can harm your computer or compromise your data . Also , you may face legal consequences if you are caught using pirated software . Therefore , we do not recommend or endorse this method , and we advise you to buy a legitimate license from the official website instead . However , if you still want to proceed with this method , here are the steps you need to follow :

-

System requirements

-

Before installing Chief Architect Premier X12 , make sure your computer meets the minimum system requirements , which are :

- -

Installation steps

-
    -
  1. Download the zip file from the link and extract it to a folder on your computer .
  2. -
  3. Run the setup.exe file as administrator and follow the instructions on the screen .
  4. -
  5. Select the destination folder where you want to install the software .
  6. -
  7. Select the components you want to install such as libraries , bonus catalogs , manufacturer catalogs , etc.
  8. -
  9. Wait for the installation process to complete .
  10. -
  11. Do not run the software yet .
  12. -
-

Activation steps

-
    -
  1. In the folder where you extracted the zip file , open the Crack folder .
  2. -
  3. Copy the file named Chief_Architect_Premier_X11.exe .
  4. -
  5. Paste it in the installation folder where you installed the software , usually C:\Program Files\Chief Architect\Chief Architect Premier X11 .
  6. -
  7. Replace the original file when prompted .
  8. -
  9. Run the software as administrator .
  10. -
  11. Select I have a license key option .
  12. -
  13. In another window , run the file named keygen.exe from the Crack folder .
  14. -
  15. Select Generate option .
  16. -
  17. Copy the generated license key from the keygen window .
  18. -
  19. Paste it in the software activation window .
  20. -
  21. Click OK to confirm the activation .
  22. -
  23. Enjoy using Chief Architect Premier X12 with full features .
  24. -
-

Pros and cons of Chief Architect Premier X12

-

Chief Architect Premier X12 is a powerful and versatile 3D architecture software that can help you create stunning designs and realistic renderings. However, it also has some drawbacks that you should be aware of. Here are some of the pros and cons of using Chief Architect Premier X12:

-

Pros

- -

Cons

- -

Conclusion

-

In conclusion, Chief Architect Premier X12 is a professional 3D architecture software that can help you create amazing designs and realistic renderings for residential and commercial projects. It has many features and benefits that make it a powerful and user-friendly tool. However, it also has some drawbacks that you should consider before buying or using it. If you want to use Chief Architect Premier X12 without paying for a license , you can download the patched keygen version from this link. However , be aware that this is an illegal and risky way of using the software , as it may contain viruses , malware , or spyware that can harm your computer or compromise your data . Also , you may face legal consequences if you are caught using pirated software . Therefore , we do not recommend or endorse this method , and we advise you to buy a legitimate license from the official website instead . We hope this article has given you some useful information and insights about Chief Architect Premier X12 22.5.2.56 Patched Keygen.

-

FAQs

-

Here are some frequently asked questions about Chief Architect Premier X12 22.5.2.56 Patched Keygen:

-
    -
  1. What is the difference between Chief Architect Premier X12 and Chief Architect Interiors X12?
  2. -

    Chief Architect Premier X12 is the full version of the software that can handle all aspects of building design , from conceptual design to construction documents . Chief Architect Interiors X12 is a specialized version of the software that focuses on interior design , kitchen and bath design , remodeling , etc. It has fewer features and tools than Chief Architect Premier X12 , but it is cheaper to buy . You can compare the two versions here.

    -
  3. Can I use Chief Architect Premier X12 on Mac?
  4. -

    Yes , you can use Chief Architect Premier X12 on Mac , as long as your Mac meets the minimum system requirements , which are :

    - -

    You can download the Mac version of Chief Architect Premier X12 from here.

    -
  5. Can I get a free trial of Chief Architect Premier X12?
  6. -

    Yes , you can get a free trial of Chief Architect Premier X12 for 30 days from here. You will need to fill out a form with your name , email address , phone number , etc. to get the download link . You will also need to create an account on the official website to activate the trial . The trial version has all the features and functions of the full version , but it will expire after 30 days . You will also not be able to save or print your work with the trial version . You will need to buy a license to continue using the software after the trial period ends .

    -
  7. How can I learn how to use Chief Architect Premier X12?
  8. -

    You can learn how to use Chief Architect Premier X12 by watching video tutorials , reading user manuals , attending webinars , joining online forums , etc. You can find these resources on the official website here. You can also contact customer support if you have any questions or issues with the software . You can find their contact information here.

    -
  9. Where can I find more reviews about Chief Architect Premier X12?
  10. -

    You can find more reviews about Chief Architect Premier X12 on online platforms such as Capterra, Software Advice, Trustpilot, etc. You can also read testimonials from satisfied customers on the official website here. You can also watch video reviews on YouTube channels such as Home Designer Software, The Rendered Home, etc.

    -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/?y??6?? ?? ??x????ownna? REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/?y??6?? ?? ??x????ownna? REPACK.md deleted file mode 100644 index 3ed387cd42998bde6c1a7cb58b7148c9d45c2238..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/?y??6?? ?? ??x????ownna? REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

?y??6?? ?? ??x????ownna?


Download >>> https://imgfil.com/2uy0YV



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Kisah Nabi Musa Full Movie Free.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Kisah Nabi Musa Full Movie Free.md deleted file mode 100644 index 5f6ffa1eb06d55a785a5b1b7e84a7048ac2f65c3..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Film Kisah Nabi Musa Full Movie Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

download film kisah nabi musa full movie


Download ===> https://imgfil.com/2uxXCs



-
- . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Downton Abbey Saison 3 Torrent French.md b/spaces/1gistliPinn/ChatGPT4/Examples/Downton Abbey Saison 3 Torrent French.md deleted file mode 100644 index 669bf2bd5edf98f0aa0eb0616c149600fb3b8209..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Downton Abbey Saison 3 Torrent French.md +++ /dev/null @@ -1,7 +0,0 @@ -

Downton Abbey Saison 3 Torrent French


Download Filehttps://imgfil.com/2uxY6d



- -Although Season 5, like Season 4, is not as dramatic 1st, 2nd and 3rd seasons (which were outstanding), it's still a great series. . Unlike many other series in the world, when we talk about horror films that can be scary and really creepy, but still they only show fear and death in some way, Supernatural is what makes the viewer fear- present because this series has a more realistic sense of horror than others. -This gives the viewer a sense of reality that makes the series so scary and yet so good. 8a78ff9644
-
-
-

diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion_safe/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion_safe/__init__.py deleted file mode 100644 index 944420c47c0e0047df5e8bfdf707c75381c985ac..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion_safe/__init__.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa - -from dataclasses import dataclass -from enum import Enum -from typing import List, Optional, Union - -import numpy as np -import PIL -from PIL import Image - -from ...utils import BaseOutput, is_paddle_available, is_paddlenlp_available - - -@dataclass -class SafetyConfig(object): - WEAK = { - "sld_warmup_steps": 15, - "sld_guidance_scale": 20, - "sld_threshold": 0.0, - "sld_momentum_scale": 0.0, - "sld_mom_beta": 0.0, - } - MEDIUM = { - "sld_warmup_steps": 10, - "sld_guidance_scale": 1000, - "sld_threshold": 0.01, - "sld_momentum_scale": 0.3, - "sld_mom_beta": 0.4, - } - STRONG = { - "sld_warmup_steps": 7, - "sld_guidance_scale": 2000, - "sld_threshold": 0.025, - "sld_momentum_scale": 0.5, - "sld_mom_beta": 0.7, - } - MAX = { - "sld_warmup_steps": 0, - "sld_guidance_scale": 5000, - "sld_threshold": 1.0, - "sld_momentum_scale": 0.5, - "sld_mom_beta": 0.7, - } - - -@dataclass -class StableDiffusionSafePipelineOutput(BaseOutput): - """ - Output class for Safe Stable Diffusion pipelines. - Args: - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width, - num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline. - nsfw_content_detected (`List[bool]`) - List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, or `None` if safety checking could not be performed. - images (`List[PIL.Image.Image]` or `np.ndarray`) - List of denoised PIL images that were flagged by the safety checker any may contain "not-safe-for-work" - (nsfw) content, or `None` if no safety check was performed or no images were flagged. - applied_safety_concept (`str`) - The safety concept that was applied for safety guidance, or `None` if safety guidance was disabled - """ - - images: Union[List[PIL.Image.Image], np.ndarray] - nsfw_content_detected: Optional[List[bool]] - unsafe_images: Optional[Union[List[PIL.Image.Image], np.ndarray]] - applied_safety_concept: Optional[str] - - -if is_paddle_available() and is_paddlenlp_available(): - from .pipeline_stable_diffusion_safe import StableDiffusionPipelineSafe - from .safety_checker import SafeStableDiffusionSafetyChecker diff --git a/spaces/232labs/VToonify/vtoonify/model/vgg.py b/spaces/232labs/VToonify/vtoonify/model/vgg.py deleted file mode 100644 index a1043d5bd8bdd0d1484d2270ae0d33c29495856c..0000000000000000000000000000000000000000 --- a/spaces/232labs/VToonify/vtoonify/model/vgg.py +++ /dev/null @@ -1,60 +0,0 @@ -import torch -import torch.nn as nn -import torchvision - -# VGG architecter, used for the perceptual loss using a pretrained VGG network -class VGG19(torch.nn.Module): - def __init__(self, requires_grad=False): - super().__init__() - vgg_pretrained_features = torchvision.models.vgg19(pretrained=True).features - self.slice1 = torch.nn.Sequential() - self.slice2 = torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.slice6 = torch.nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 32): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - for x in range(32, 36): - self.slice6.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - self.pool = nn.AdaptiveAvgPool2d(output_size=1) - - self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1,-1, 1, 1).cuda() * 2 - 1 - self.std = torch.tensor([0.229, 0.224, 0.225]).view(1,-1, 1, 1).cuda() * 2 - - def forward(self, X): # relui_1 - X = (X-self.mean)/self.std - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5[:-2](h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - -# Perceptual loss that uses a pretrained VGG network -class VGGLoss(nn.Module): - def __init__(self): - super(VGGLoss, self).__init__() - self.vgg = VGG19().cuda() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach()) - return loss \ No newline at end of file diff --git a/spaces/801artistry/RVC801/infer/lib/rmvpe.py b/spaces/801artistry/RVC801/infer/lib/rmvpe.py deleted file mode 100644 index 2a387ebe73c7e1dd8bb7ccad1ea9e0ea89848ece..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/rmvpe.py +++ /dev/null @@ -1,717 +0,0 @@ -import pdb, os - -import numpy as np -import torch -try: - #Fix "Torch not compiled with CUDA enabled" - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import ipex_init - ipex_init() -except Exception: - pass -import torch.nn as nn -import torch.nn.functional as F -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window - -import logging - -logger = logging.getLogger(__name__) - - -###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. - dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = normalize(win_sq, norm=norm) ** 2 - win_sq = pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -class STFT(torch.nn.Module): - def __init__( - self, filter_length=1024, hop_length=512, win_length=None, window="hann" - ): - """ - This module implements an STFT using 1D convolution and 1D transpose convolutions. - This is a bit tricky so there are some cases that probably won't work as working - out the same sizes before and after in all overlap add setups is tough. Right now, - this code should work with hop lengths that are half the filter length (50% overlap - between frames). - - Keyword Arguments: - filter_length {int} -- Length of filters used (default: {1024}) - hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512}) - win_length {[type]} -- Length of the window function applied to each frame (if not specified, it - equals the filter length). (default: {None}) - window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris) - (default: {'hann'}) - """ - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length if win_length else filter_length - self.window = window - self.forward_transform = None - self.pad_amount = int(self.filter_length / 2) - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])] - ) - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :] - ) - - assert filter_length >= self.win_length - # get window and zero center pad it to filter_length - fft_window = get_window(window, self.win_length, fftbins=True) - fft_window = pad_center(fft_window, size=filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer("forward_basis", forward_basis.float()) - self.register_buffer("inverse_basis", inverse_basis.float()) - - def transform(self, input_data): - """Take input data (audio) to STFT domain. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - """ - num_batches = input_data.shape[0] - num_samples = input_data.shape[-1] - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - # print(1234,input_data.shape) - input_data = F.pad( - input_data.unsqueeze(1), - (self.pad_amount, self.pad_amount, 0, 0, 0, 0), - mode="reflect", - ).squeeze(1) - # print(2333,input_data.shape,self.forward_basis.shape,self.hop_length) - # pdb.set_trace() - forward_transform = F.conv1d( - input_data, self.forward_basis, stride=self.hop_length, padding=0 - ) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - # phase = torch.atan2(imag_part.data, real_part.data) - - return magnitude # , phase - - def inverse(self, magnitude, phase): - """Call the inverse STFT (iSTFT), given magnitude and phase tensors produced - by the ```transform``` function. - - Arguments: - magnitude {tensor} -- Magnitude of STFT with shape (num_batch, - num_frequencies, num_frames) - phase {tensor} -- Phase of STFT with shape (num_batch, - num_frequencies, num_frames) - - Returns: - inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. Of - shape (num_batch, num_samples) - """ - recombine_magnitude_phase = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - self.inverse_basis, - stride=self.hop_length, - padding=0, - ) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, - magnitude.size(-1), - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.filter_length, - dtype=np.float32, - ) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0] - ) - window_sum = torch.from_numpy(window_sum).to(inverse_transform.device) - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[ - approx_nonzero_indices - ] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[..., self.pad_amount :] - inverse_transform = inverse_transform[..., : self.num_samples] - inverse_transform = inverse_transform.squeeze(1) - - return inverse_transform - - def forward(self, input_data): - """Take input data (audio) to STFT domain and then back to audio. - - Arguments: - input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples) - - Returns: - reconstruction {tensor} -- Reconstructed audio given magnitude and phase. Of - shape (num_batch, num_samples) - """ - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -from time import time as ttime - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * nn.N_MELS, nn.N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - # print(mel.shape) - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - # print(x.shape) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - # "cpu"if(audio.device.type=="privateuseone") else audio.device - audio.device - ) - # fft = torch.stft(#doesn't support pytorch_dml - # # audio.cpu() if(audio.device.type=="privateuseone")else audio, - # audio, - # n_fft=n_fft_new, - # hop_length=hop_length_new, - # win_length=win_length_new, - # window=self.hann_window[keyshift_key], - # center=center, - # return_complex=True, - # ) - # magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - # print(1111111111) - # print(222222222222222,audio.device,self.is_half) - if hasattr(self, "stft") == False: - # print(n_fft_new,hop_length_new,win_length_new,audio.shape) - self.stft = STFT( - filter_length=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window="hann", - ).to(audio.device) - magnitude = self.stft.transform(audio) # phase - # if (audio.device.type == "privateuseone"): - # magnitude=magnitude.to(audio.device) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - # print(log_mel_spec.device.type) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - if "privateuseone" in str(device): - import onnxruntime as ort - - ort_session = ort.InferenceSession( - "%s/rmvpe.onnx" % os.environ["rmvpe_root"], - providers=["DmlExecutionProvider"], - ) - self.model = ort_session - else: - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="constant" - ) - if "privateuseone" in str(self.device): - onnx_input_name = self.model.get_inputs()[0].name - onnx_outputs_names = self.model.get_outputs()[0].name - hidden = self.model.run( - [onnx_outputs_names], - input_feed={onnx_input_name: mel.cpu().numpy()}, - )[0] - else: - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - # torch.cuda.synchronize() - t0 = ttime() - mel = self.mel_extractor( - torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True - ) - # print(123123123,mel.device.type) - # torch.cuda.synchronize() - t1 = ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - t2 = ttime() - # print(234234,hidden.device.type) - if "privateuseone" not in str(self.device): - hidden = hidden.squeeze(0).cpu().numpy() - else: - hidden = hidden[0] - if self.is_half == True: - hidden = hidden.astype("float32") - - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - t3 = ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def infer_from_audio_with_pitch(self, audio, thred=0.03, f0_min=50, f0_max=1100): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - mel = self.mel_extractor(audio, center=True) - hidden = self.mel2hidden(mel) - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - f0[(f0 < f0_min) | (f0 > f0_max)] = 0 - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -if __name__ == "__main__": - import librosa - import soundfile as sf - - audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav") - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - audio_bak = audio.copy() - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt" - thred = 0.03 # 0.01 - device = "cuda" if torch.cuda.is_available() else "cpu" - rmvpe = RMVPE(model_path, is_half=False, device=device) - t0 = ttime() - f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - # f0 = rmvpe.infer_from_audio(audio, thred=thred) - t1 = ttime() - logger.info("%s %.2f", f0.shape, t1 - t0) diff --git a/spaces/A666sxr/Genshin_TTS/data_utils.py b/spaces/A666sxr/Genshin_TTS/data_utils.py deleted file mode 100644 index 4855699d23d5dee36d4a12e875c7465265caac0f..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/data_utils.py +++ /dev/null @@ -1,392 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Wiki 8da06b3dcf1b4eaaa3e90aa70feefe56.md b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Wiki 8da06b3dcf1b4eaaa3e90aa70feefe56.md deleted file mode 100644 index 9e1b3c5dc090f76ff01886dffba49490a338fbed..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Wiki 8da06b3dcf1b4eaaa3e90aa70feefe56.md +++ /dev/null @@ -1 +0,0 @@ -# Engineering Wiki \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/stft.py b/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/stft.py deleted file mode 100644 index 04a1da93e3bd5777e8759f1b4bc5c0eaca149317..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/sound_extraction/utils/stft.py +++ /dev/null @@ -1,159 +0,0 @@ -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -import librosa.util as librosa_util -from librosa.util import pad_center, tiny -# from audio_processing import window_sumsquare - -def window_sumsquare(window, n_frames, hop_length=512, win_length=1024, - n_fft=1024, dtype=np.float32, norm=None): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. - dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm)**2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))] - return x - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=1024, hop_length=512, win_length=1024, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase # [batch_size, F(513), T(1251)] - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return inverse_transform #[batch_size, 1, sample_num] - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - -if __name__ == '__main__': - a = torch.randn(4, 320000) - stft = STFT() - mag, phase = stft.transform(a) - # rec_a = stft.inverse(mag, phase) - print(mag.shape) diff --git a/spaces/AIWaves/Debate/src/agents/utils.py b/spaces/AIWaves/Debate/src/agents/utils.py deleted file mode 100644 index dcfb5697443049ca18ba568508e227801f51e004..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/utils.py +++ /dev/null @@ -1,480 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The AIWaves Inc. team. - -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""helper functions for an LLM autonoumous agent""" -import csv -import random -import json -import pandas -import numpy as np -import requests -import torch -from tqdm import tqdm -from text2vec import semantic_search -import re -import datetime -from langchain.document_loaders import UnstructuredFileLoader -from langchain.text_splitter import CharacterTextSplitter -from sentence_transformers import SentenceTransformer -import string -import random -import os -import openai - -embed_model_name = os.environ["Embed_Model"] if "Embed_Model" in os.environ else "text-embedding-ada-002" -if embed_model_name in ["text-embedding-ada-002"]: - pass -else: - embedding_model = SentenceTransformer( - embed_model_name, device=torch.device("cpu") - ) - -def get_embedding(sentence): - if embed_model_name in ["text-embedding-ada-002"]: - openai.api_key = os.environ["API_KEY"] - # if "PROXY" in os.environ: - # assert "http:" in os.environ["PROXY"] or "socks" in os.environ["PROXY"],"PROXY error,PROXY must be http or socks" - # openai.proxy = os.environ["PROXY"] - if "API_BASE" in os.environ: - openai.api_base = os.environ["API_BASE"] - embedding_model = openai.Embedding - embed = embedding_model.create( - model=embed_model_name, - input=sentence - ) - embed = embed["data"][0]["embedding"] - embed = torch.tensor(embed,dtype=torch.float32) - else: - embed = embedding_model.encode(sentence,convert_to_tensor=True) - if len(embed.shape)==1: - embed = embed.unsqueeze(0) - return embed - - -def get_code(): - return "".join(random.sample(string.ascii_letters + string.digits, 8)) - - -def get_content_between_a_b(start_tag, end_tag, text): - """ - - Args: - start_tag (str): start_tag - end_tag (str): end_tag - text (str): complete sentence - - Returns: - str: the content between start_tag and end_tag - """ - extracted_text = "" - start_index = text.find(start_tag) - while start_index != -1: - end_index = text.find(end_tag, start_index + len(start_tag)) - if end_index != -1: - extracted_text += text[start_index + - len(start_tag):end_index] + " " - start_index = text.find(start_tag, end_index + len(end_tag)) - else: - break - - return extracted_text.strip() - - -def extract(text, type): - """extract the content between - - Args: - text (str): complete sentence - type (str): tag - - Returns: - str: content between - """ - target_str = get_content_between_a_b(f"<{type}>", f"", text) - return target_str - -def count_files_in_directory(directory): - # 获取指定目录下的文件数目 - file_count = len([f for f in os.listdir(directory) if os.path.isfile(os.path.join(directory, f))]) - return file_count - -def delete_oldest_files(directory, num_to_keep): - # 获取目录下文件列表,并按修改时间排序 - files = [(f, os.path.getmtime(os.path.join(directory, f))) for f in os.listdir(directory) if os.path.isfile(os.path.join(directory, f))] - - # 删除最开始的 num_to_keep 个文件 - for i in range(min(num_to_keep, len(files))): - file_to_delete = os.path.join(directory, files[i][0]) - os.remove(file_to_delete) - -def delete_files_if_exceed_threshold(directory, threshold, num_to_keep): - # 获取文件数目并进行处理 - file_count = count_files_in_directory(directory) - if file_count > threshold: - delete_count = file_count - num_to_keep - delete_oldest_files(directory, delete_count) - -def save_logs(log_path, messages, response): - if not os.path.exists(log_path): - os.mkdir(log_path) - delete_files_if_exceed_threshold(log_path, 20, 10) - log_path = log_path if log_path else "logs" - log = {} - log["input"] = messages - log["output"] = response - os.makedirs(log_path, exist_ok=True) - log_file = os.path.join( - log_path, - datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S") + ".json") - with open(log_file, "w", encoding="utf-8") as f: - json.dump(log, f, ensure_ascii=False, indent=2) - - - -def semantic_search_word2vec(query_embedding, kb_embeddings, top_k): - return semantic_search(query_embedding, kb_embeddings, top_k=top_k) - - -def cut_sent(para): - para = re.sub("([。!?\?])([^”’])", r"\1\n\2", para) - para = re.sub("(\.{6})([^”’])", r"\1\n\2", para) - para = re.sub("(\…{2})([^”’])", r"\1\n\2", para) - para = re.sub("([。!?\?][”’])([^,。!?\?])", r"\1\n\2", para) - para = para.rstrip() - pieces = [i for i in para.split("\n") if i] - batch_size = 3 - chucks = [ - " ".join(pieces[i:i + batch_size]) - for i in range(0, len(pieces), batch_size) - ] - return chucks - - -def process_document(file_path): - """ - Save QA_csv to json. - Args: - model: LLM to generate embeddings - qa_dict: A dict contains Q&A - save_path: where to save the json file. - Json format: - Dict[num,Dict[q:str,a:str,chunk:str,emb:List[float]] - """ - final_dict = {} - count = 0 - if file_path.endswith(".csv"): - dataset = pandas.read_csv(file_path) - questions = dataset["question"] - answers = dataset["answer"] - # embedding q+chunk - for q, a in zip(questions, answers): - for text in cut_sent(a): - temp_dict = {} - temp_dict["q"] = q - temp_dict["a"] = a - temp_dict["chunk"] = text - temp_dict["emb"] = get_embedding(q + text).tolist() - final_dict[count] = temp_dict - count += 1 - # embedding chunk - for q, a in zip(questions, answers): - for text in cut_sent(a): - temp_dict = {} - temp_dict["q"] = q - temp_dict["a"] = a - temp_dict["chunk"] = text - temp_dict["emb"] = get_embedding(text).tolist() - final_dict[count] = temp_dict - count += 1 - # embedding q - for q, a in zip(questions, answers): - temp_dict = {} - temp_dict["q"] = q - temp_dict["a"] = a - temp_dict["chunk"] = a - temp_dict["emb"] = get_embedding(q).tolist() - final_dict[count] = temp_dict - count += 1 - # embedding q+a - for q, a in zip(questions, answers): - temp_dict = {} - temp_dict["q"] = q - temp_dict["a"] = a - temp_dict["chunk"] = a - temp_dict["emb"] = get_embedding(q + a).tolist() - final_dict[count] = temp_dict - count += 1 - # embedding a - for q, a in zip(questions, answers): - temp_dict = {} - temp_dict["q"] = q - temp_dict["a"] = a - temp_dict["chunk"] = a - temp_dict["emb"] = get_embedding(a).tolist() - final_dict[count] = temp_dict - count += 1 - print(f"finish updating {len(final_dict)} data!") - os.makedirs("temp_database", exist_ok=True) - save_path = os.path.join( - "temp_database/", - file_path.split("/")[-1].replace("." + file_path.split(".")[1], - ".json"), - ) - print(save_path) - with open(save_path, "w") as f: - json.dump(final_dict, f, ensure_ascii=False, indent=2) - return {"knowledge_base": save_path, "type": "QA"} - else: - loader = UnstructuredFileLoader(file_path) - docs = loader.load() - text_spiltter = CharacterTextSplitter(chunk_size=200, - chunk_overlap=100) - docs = text_spiltter.split_text(docs[0].page_content) - os.makedirs("temp_database", exist_ok=True) - save_path = os.path.join( - "temp_database/", - file_path.replace("." + file_path.split(".")[1], ".json")) - final_dict = {} - count = 0 - for c in tqdm(docs): - temp_dict = {} - temp_dict["chunk"] = c - temp_dict["emb"] = get_embedding(c).tolist() - final_dict[count] = temp_dict - count += 1 - print(f"finish updating {len(final_dict)} data!") - with open(save_path, "w") as f: - json.dump(final_dict, f, ensure_ascii=False, indent=2) - return {"knowledge_base": save_path, "type": "UnstructuredFile"} - -def load_knowledge_base_qa(path): - """ - Load json format knowledge base. - """ - print("path", path) - with open(path, "r") as f: - data = json.load(f) - embeddings = [] - questions = [] - answers = [] - chunks = [] - for idx in range(len(data.keys())): - embeddings.append(data[str(idx)]["emb"]) - questions.append(data[str(idx)]["q"]) - answers.append(data[str(idx)]["a"]) - chunks.append(data[str(idx)]["chunk"]) - embeddings = np.array(embeddings, dtype=np.float32) - embeddings = torch.from_numpy(embeddings).squeeze() - return embeddings, questions, answers, chunks - - -def load_knowledge_base_UnstructuredFile(path): - """ - Load json format knowledge base. - """ - with open(path, "r") as f: - data = json.load(f) - embeddings = [] - chunks = [] - for idx in range(len(data.keys())): - embeddings.append(data[str(idx)]["emb"]) - chunks.append(data[str(idx)]["chunk"]) - embeddings = np.array(embeddings, dtype=np.float32) - embeddings = torch.from_numpy(embeddings).squeeze() - return embeddings, chunks - - -def cos_sim(a: torch.Tensor, b: torch.Tensor): - """ - Computes the cosine similarity cos_sim(a[i], b[j]) for all i and j. - :return: Matrix with res[i][j] = cos_sim(a[i], b[j]) - """ - if not isinstance(a, torch.Tensor): - a = torch.tensor(a) - - if not isinstance(b, torch.Tensor): - b = torch.tensor(b) - - if len(a.shape) == 1: - a = a.unsqueeze(0) - - if len(b.shape) == 1: - b = b.unsqueeze(0) - - a_norm = torch.nn.functional.normalize(a, p=2, dim=1) - b_norm = torch.nn.functional.normalize(b, p=2, dim=1) - return torch.mm(a_norm, b_norm.transpose(0, 1)) - - -def matching_a_b(a, b, requirements=None): - a_embedder = get_embedding(a) - # 获取embedder - b_embeder = get_embedding(b) - sim_scores = cos_sim(a_embedder, b_embeder)[0] - return sim_scores - - -def matching_category(inputtext, - forest_name, - requirements=None, - cat_embedder=None, - top_k=3): - """ - Args: - inputtext: the category name to be matched - forest: search tree - top_k: the default three highest scoring results - Return: - topk matching_result. List[List] [[top1_name,top2_name,top3_name],[top1_score,top2_score,top3_score]] - """ - - sim_scores = torch.zeros([100]) - if inputtext: - input_embeder = get_embedding(inputtext) - sim_scores = cos_sim(input_embeder, cat_embedder)[0] - - if requirements: - requirements = requirements.split(" ") - requirements_embedder = get_embedding(requirements) - req_scores = cos_sim(requirements_embedder, cat_embedder) - req_scores = torch.mean(req_scores, dim=0) - total_scores = req_scores - else: - total_scores = sim_scores - - top_k_cat = torch.topk(total_scores, k=top_k) - top_k_score, top_k_idx = top_k_cat[0], top_k_cat[1] - top_k_name = [forest_name[top_k_idx[i]] for i in range(0, top_k)] - - return [top_k_name, top_k_score.tolist(), top_k_idx] - - -def sample_with_order_preserved(lst, num): - """Randomly sample from the list while maintaining the original order.""" - indices = list(range(len(lst))) - sampled_indices = random.sample(indices, num) - sampled_indices.sort() # 保持原顺序 - return [lst[i] for i in sampled_indices] - - -def limit_values(data, max_values): - """Reduce each key-value list in the dictionary to the specified size, keeping the order of the original list unchanged.""" - for key, values in data.items(): - if len(values) > max_values: - data[key] = sample_with_order_preserved(values, max_values) - return data - - -def limit_keys(data, max_keys): - """Reduce the dictionary to the specified number of keys.""" - keys = list(data.keys()) - if len(keys) > max_keys: - keys = sample_with_order_preserved(keys, max_keys) - data = {key: data[key] for key in keys} - return data - - -def flatten_dict(nested_dict): - """ - flatten the dictionary - """ - flattened_dict = {} - for key, value in nested_dict.items(): - if isinstance(value, dict): - flattened_subdict = flatten_dict(value) - flattened_dict.update(flattened_subdict) - else: - flattened_dict[key] = value - return flattened_dict - - -def merge_list(list1, list2): - for l in list2: - if l not in list1: - list1.append(l) - return list1 - - -def Search_Engines(req): - FETSIZE = eval(os.environ["FETSIZE"]) if "FETSIZE" in os.environ else 5 - - new_dict = {"keyword": req, "catLeafName": "", "fetchSize": FETSIZE} - url = os.environ["SHOPPING_SEARCH"] - res = requests.post( - url= url, - json=new_dict, - ) - user_dict = json.loads(res.text) - if "data" in user_dict.keys(): - request_items = user_dict["data"]["items"] # 查询到的商品信息JSON - top_category = user_dict["data"]["topCategories"] - return request_items, top_category - else: - return [] - - -def search_with_api(requirements, categery): - - FETSIZE = eval(os.environ["FETSIZE"]) if "FETSIZE" in os.environ else 5 - - request_items = [] - all_req_list = requirements.split(" ") - count = 0 - - while len(request_items) < FETSIZE and len(all_req_list) > 0: - if count: - all_req_list.pop(0) - all_req = (" ").join(all_req_list) - if categery not in all_req_list: - all_req = all_req + " " + categery - now_request_items, top_category = Search_Engines(all_req) - request_items = merge_list(request_items, now_request_items) - count += 1 - new_top = [] - for category in top_category: - if "其它" in category or "其它" in category: - continue - else: - new_top.append(category) - if len(request_items) > FETSIZE: - request_items = request_items[:FETSIZE] - return request_items, new_top - - - -def get_relevant_history(query,history,embeddings): - """ - Retrieve a list of key history entries based on a query using semantic search. - - Args: - query (str): The input query for which key history is to be retrieved. - history (list): A list of historical key entries. - embeddings (numpy.ndarray): An array of embedding vectors for historical entries. - - Returns: - list: A list of key history entries most similar to the query. - """ - TOP_K = eval(os.environ["TOP_K"]) if "TOP_K" in os.environ else 2 - relevant_history = [] - query_embedding = get_embedding(query) - hits = semantic_search(query_embedding, embeddings, top_k=min(TOP_K,embeddings.shape[0])) - hits = hits[0] - for hit in hits: - matching_idx = hit["corpus_id"] - try: - relevant_history.append(history[matching_idx]) - except: - return [] - return relevant_history diff --git a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/app.py b/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/app.py deleted file mode 100644 index e0f03cf2557eba112bf95ebf5eb582da8d8a0fe3..0000000000000000000000000000000000000000 --- a/spaces/AIZeroToHero/Video-Automatic-Speech-Recognition/app.py +++ /dev/null @@ -1,119 +0,0 @@ -from collections import deque -import streamlit as st -import torch -from streamlit_player import st_player -from transformers import AutoModelForCTC, Wav2Vec2Processor -from streaming import ffmpeg_stream - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -player_options = { - "events": ["onProgress"], - "progress_interval": 200, - "volume": 1.0, - "playing": True, - "loop": False, - "controls": False, - "muted": False, - "config": {"youtube": {"playerVars": {"start": 1}}}, -} - -# disable rapid fading in and out on `st.code` updates -st.markdown("", unsafe_allow_html=True) - -@st.cache(hash_funcs={torch.nn.parameter.Parameter: lambda _: None}) -def load_model(model_path="facebook/wav2vec2-large-robust-ft-swbd-300h"): - processor = Wav2Vec2Processor.from_pretrained(model_path) - model = AutoModelForCTC.from_pretrained(model_path).to(device) - return processor, model - -processor, model = load_model() - -def stream_text(url, chunk_duration_ms, pad_duration_ms): - sampling_rate = processor.feature_extractor.sampling_rate - - # calculate the length of logits to cut from the sides of the output to account for input padding - output_pad_len = model._get_feat_extract_output_lengths(int(sampling_rate * pad_duration_ms / 1000)) - - # define the audio chunk generator - stream = ffmpeg_stream(url, sampling_rate, chunk_duration_ms=chunk_duration_ms, pad_duration_ms=pad_duration_ms) - - leftover_text = "" - for i, chunk in enumerate(stream): - input_values = processor(chunk, sampling_rate=sampling_rate, return_tensors="pt").input_values - - with torch.no_grad(): - logits = model(input_values.to(device)).logits[0] - if i > 0: - logits = logits[output_pad_len : len(logits) - output_pad_len] - else: # don't count padding at the start of the clip - logits = logits[: len(logits) - output_pad_len] - - predicted_ids = torch.argmax(logits, dim=-1).cpu().tolist() - if processor.decode(predicted_ids).strip(): - leftover_ids = processor.tokenizer.encode(leftover_text) - # concat the last word (or its part) from the last frame with the current text - text = processor.decode(leftover_ids + predicted_ids) - # don't return the last word in case it's just partially recognized - text, leftover_text = text.rsplit(" ", 1) - yield text - else: - yield leftover_text - leftover_text = "" - yield leftover_text - -def main(): - state = st.session_state - st.header("Video ASR Streamlit from Youtube Link") - - with st.form(key="inputs_form"): - - # Our worlds best teachers on subjects of AI, Cognitive, Neuroscience for our Behavioral and Medical Health - ytJoschaBach="https://youtu.be/cC1HszE5Hcw?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=8984" - ytSamHarris="https://www.youtube.com/watch?v=4dC_nRYIDZU&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=2" - ytJohnAbramson="https://www.youtube.com/watch?v=arrokG3wCdE&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=3" - ytElonMusk="https://www.youtube.com/watch?v=DxREm3s1scA&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=4" - ytJeffreyShainline="https://www.youtube.com/watch?v=EwueqdgIvq4&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=5" - ytJeffHawkins="https://www.youtube.com/watch?v=Z1KwkpTUbkg&list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&index=6" - ytSamHarris="https://youtu.be/Ui38ZzTymDY?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytSamHarris="https://youtu.be/4dC_nRYIDZU?list=PLHgX2IExbFouJoqEr8JMF5MbZSbyC91-L&t=7809" - ytTimelapseAI="https://www.youtube.com/watch?v=63yr9dlI0cU&list=PLHgX2IExbFovQybyfltywXnqZi5YvaSS-" - state.youtube_url = st.text_input("YouTube URL", ytTimelapseAI) - - - state.chunk_duration_ms = st.slider("Audio chunk duration (ms)", 2000, 10000, 3000, 100) - state.pad_duration_ms = st.slider("Padding duration (ms)", 100, 5000, 1000, 100) - submit_button = st.form_submit_button(label="Submit") - - if submit_button or "asr_stream" not in state: - # a hack to update the video player on value changes - state.youtube_url = ( - state.youtube_url.split("&hash=")[0] - + f"&hash={state.chunk_duration_ms}-{state.pad_duration_ms}" - ) - state.asr_stream = stream_text( - state.youtube_url, state.chunk_duration_ms, state.pad_duration_ms - ) - state.chunks_taken = 0 - - - state.lines = deque([], maxlen=100) # limit to the last n lines of subs - - - player = st_player(state.youtube_url, **player_options, key="youtube_player") - - if "asr_stream" in state and player.data and player.data["played"] < 1.0: - # check how many seconds were played, and if more than processed - write the next text chunk - processed_seconds = state.chunks_taken * (state.chunk_duration_ms / 1000) - if processed_seconds < player.data["playedSeconds"]: - text = next(state.asr_stream) - state.lines.append(text) - state.chunks_taken += 1 - if "lines" in state: - # print the lines of subs - st.code("\n".join(state.lines)) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/AIatUIUC/CodeLATS/generators/__init__.py b/spaces/AIatUIUC/CodeLATS/generators/__init__.py deleted file mode 100644 index a279f9265a96159535180e777513490be797df49..0000000000000000000000000000000000000000 --- a/spaces/AIatUIUC/CodeLATS/generators/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .py_generate import PyGenerator -from .factory import generator_factory, model_factory -from .model import ModelBase, GPT4, GPT35 diff --git a/spaces/ALSv/FSW/roop/face_analyser.py b/spaces/ALSv/FSW/roop/face_analyser.py deleted file mode 100644 index 4e2c6c84a930ce522103c4cac0df2ed3d1a3d1b7..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/roop/face_analyser.py +++ /dev/null @@ -1,53 +0,0 @@ -import threading -from typing import Any, Optional, List -import insightface -import numpy - -import roop.globals -from roop.typing import Frame, Face - -FACE_ANALYSER = None -THREAD_LOCK = threading.Lock() - - -def get_face_analyser() -> Any: - global FACE_ANALYSER - - with THREAD_LOCK: - if FACE_ANALYSER is None: - FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers) - FACE_ANALYSER.prepare(ctx_id=0) - return FACE_ANALYSER - - -def clear_face_analyser() -> Any: - global FACE_ANALYSER - - FACE_ANALYSER = None - - - -def get_one_face(frame: Frame) -> Any: - face = get_face_analyser().get(frame) - try: - return min(face, key=lambda x: x.bbox[0]) - except ValueError: - return None - - -def get_many_faces(frame: Frame) -> Optional[List[Face]]: - try: - return get_face_analyser().get(frame) - except ValueError: - return None - - -def find_similar_face(frame: Frame, reference_face: Face) -> Optional[Face]: - many_faces = get_many_faces(frame) - if many_faces: - for face in many_faces: - if hasattr(face, 'normed_embedding') and hasattr(reference_face, 'normed_embedding'): - distance = numpy.sum(numpy.square(face.normed_embedding - reference_face.normed_embedding)) - if distance < roop.globals.similar_face_distance: - return face - return None diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/actions/snapScrollToBottom.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/actions/snapScrollToBottom.ts deleted file mode 100644 index b22a0648221f6b58853a910fb6286f79574a0246..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/actions/snapScrollToBottom.ts +++ /dev/null @@ -1,54 +0,0 @@ -import { navigating } from "$app/stores"; -import { tick } from "svelte"; -import { get } from "svelte/store"; - -const detachedOffset = 10; - -/** - * @param node element to snap scroll to bottom - * @param dependency pass in a dependency to update scroll on changes. - */ -export const snapScrollToBottom = (node: HTMLElement, dependency: unknown) => { - let prevScrollValue = node.scrollTop; - let isDetached = false; - - const handleScroll = () => { - // if user scrolled up, we detach - if (node.scrollTop < prevScrollValue) { - isDetached = true; - } - - // if user scrolled back to within 10px of bottom, we reattach - if (node.scrollTop - (node.scrollHeight - node.clientHeight) >= -detachedOffset) { - isDetached = false; - } - - prevScrollValue = node.scrollTop; - }; - - const updateScroll = async (_options: { force?: boolean } = {}) => { - const defaultOptions = { force: false }; - const options = { ...defaultOptions, ..._options }; - const { force } = options; - - if (!force && isDetached && !get(navigating)) return; - - // wait for next tick to ensure that the DOM is updated - await tick(); - - node.scrollTo({ top: node.scrollHeight }); - }; - - node.addEventListener("scroll", handleScroll); - - if (dependency) { - updateScroll({ force: true }); - } - - return { - update: updateScroll, - destroy: () => { - node.removeEventListener("scroll", handleScroll); - }, - }; -}; diff --git a/spaces/AchyuthGamer/OpenGPT/client/css/checkbox.css b/spaces/AchyuthGamer/OpenGPT/client/css/checkbox.css deleted file mode 100644 index 94955b604ea3fab493a50d740fb29be1a8ef6cd3..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/client/css/checkbox.css +++ /dev/null @@ -1,55 +0,0 @@ -.checkbox input { - height: 0; - width: 0; - display: none; -} - -.checkbox span { - font-size: 0.875rem; - color: var(--colour-2); - margin-left: 4px; -} - -.checkbox label:after { - content: ""; - position: absolute; - top: 50%; - transform: translateY(-50%); - left: 5px; - width: 20px; - height: 20px; - background: var(--blur-border); - border-radius: 90px; - transition: 0.33s; -} - -.checkbox input + label:after, -.checkbox input:checked + label { - background: var(--colour-3); -} - -.checkbox input + label, -.checkbox input:checked + label:after { - background: var(--blur-border); -} - -.checkbox input:checked + label:after { - left: calc(100% - 5px - 20px); -} - -@media screen and (max-width: 990px) { - .checkbox label { - width: 25px; - height: 15px; - } - - .checkbox label:after { - left: 2px; - width: 10px; - height: 10px; - } - - .checkbox input:checked + label:after { - left: calc(100% - 2px - 10px); - } -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/EasyChat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/EasyChat.py deleted file mode 100644 index ffe9a785a61f17d3b816089165f38dd53e1d7c3f..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/EasyChat.py +++ /dev/null @@ -1,111 +0,0 @@ -from __future__ import annotations - -import json -import random - -import requests - -from ...typing import Any, CreateResult -from ..base_provider import BaseProvider - - -class EasyChat(BaseProvider): - url: str = "https://free.easychat.work" - supports_stream = True - supports_gpt_35_turbo = True - working = False - - @staticmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, **kwargs: Any) -> CreateResult: - - active_servers = [ - "https://chat10.fastgpt.me", - "https://chat9.fastgpt.me", - "https://chat1.fastgpt.me", - "https://chat2.fastgpt.me", - "https://chat3.fastgpt.me", - "https://chat4.fastgpt.me", - "https://gxos1h1ddt.fastgpt.me" - ] - - server = active_servers[kwargs.get("active_server", random.randint(0, 5))] - headers = { - "authority" : f"{server}".replace("https://", ""), - "accept" : "text/event-stream", - "accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3,fa=0.2", - "content-type" : "application/json", - "origin" : f"{server}", - "referer" : f"{server}/", - "x-requested-with" : "XMLHttpRequest", - 'plugins' : '0', - 'sec-ch-ua' : '"Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"Windows"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36', - 'usesearch' : 'false', - 'x-requested-with' : 'XMLHttpRequest' - } - - json_data = { - "messages" : messages, - "stream" : stream, - "model" : model, - "temperature" : kwargs.get("temperature", 0.5), - "presence_penalty" : kwargs.get("presence_penalty", 0), - "frequency_penalty" : kwargs.get("frequency_penalty", 0), - "top_p" : kwargs.get("top_p", 1) - } - - session = requests.Session() - # init cookies from server - session.get(f"{server}/") - - response = session.post(f"{server}/api/openai/v1/chat/completions", - headers=headers, json=json_data, stream=stream) - - if response.status_code == 200: - - if stream == False: - json_data = response.json() - - if "choices" in json_data: - yield json_data["choices"][0]["message"]["content"] - else: - raise Exception("No response from server") - - else: - - for chunk in response.iter_lines(): - - if b"content" in chunk: - splitData = chunk.decode().split("data:") - - if len(splitData) > 1: - yield json.loads(splitData[1])["choices"][0]["delta"]["content"] - else: - continue - else: - raise Exception(f"Error {response.status_code} from server : {response.reason}") - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ("presence_penalty", "int"), - ("frequency_penalty", "int"), - ("top_p", "int"), - ("active_server", "int"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" diff --git a/spaces/Adapter/T2I-Adapter/app.py b/spaces/Adapter/T2I-Adapter/app.py deleted file mode 100644 index ff2e874320d8fb065f445e3fb75371ecadd83fe4..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/app.py +++ /dev/null @@ -1,483 +0,0 @@ -# demo inspired by https://huggingface.co/spaces/lambdalabs/image-mixer-demo -import argparse -import copy -import os -import shlex -import subprocess -from functools import partial -from itertools import chain - -import cv2 -import gradio as gr -import torch -from basicsr.utils import tensor2img -from huggingface_hub import hf_hub_url -from pytorch_lightning import seed_everything -from torch import autocast - -from ldm.inference_base import (DEFAULT_NEGATIVE_PROMPT, diffusion_inference, get_adapters, get_sd_models) -from ldm.modules.extra_condition import api -from ldm.modules.extra_condition.api import (ExtraCondition, get_adapter_feature, get_cond_model) -import numpy as np -from ldm.util import read_state_dict - -torch.set_grad_enabled(False) - -supported_cond_map = ['style', 'color', 'sketch', 'openpose', 'depth', 'canny'] -supported_cond = ['style', 'color', 'sketch', 'sketch', 'openpose', 'depth', 'canny'] -draw_map = gr.Interface(lambda x: x, gr.Image(source="canvas"), gr.Image()) - -# download the checkpoints -urls = { - 'TencentARC/T2I-Adapter': [ - 'models/t2iadapter_keypose_sd14v1.pth', 'models/t2iadapter_color_sd14v1.pth', - 'models/t2iadapter_openpose_sd14v1.pth', 'models/t2iadapter_seg_sd14v1.pth', - 'models/t2iadapter_sketch_sd14v1.pth', 'models/t2iadapter_depth_sd14v1.pth', - 'third-party-models/body_pose_model.pth', "models/t2iadapter_style_sd14v1.pth", - "models/t2iadapter_canny_sd14v1.pth", 'third-party-models/table5_pidinet.pth', - "models/t2iadapter_canny_sd15v2.pth", "models/t2iadapter_depth_sd15v2.pth", - "models/t2iadapter_sketch_sd15v2.pth" - ], - 'runwayml/stable-diffusion-v1-5': ['v1-5-pruned-emaonly.ckpt'], - 'CompVis/stable-diffusion-v-1-4-original':['sd-v1-4.ckpt'], - 'andite/anything-v4.0': ['anything-v4.0-pruned.ckpt', 'anything-v4.0.vae.pt'], -} - -# download image samples -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/52127135/223114920-cae3e723-3683-424a-bebc-0875479f2409.jpg', - 'cyber_style.jpg') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/52127135/223114946-6ccc127f-cb58-443e-8677-805f5dbaf6f1.png', - 'sword.png') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/52127135/223121793-20c2ac6a-5a4f-4ff8-88ea-6d007a7959dd.png', - 'white.png') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/52127135/223127404-4a3748cf-85a6-40f3-af31-a74e206db96e.jpeg', - 'scream_style.jpeg') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/52127135/223127433-8768913f-9872-4d24-b883-a19a3eb20623.jpg', - 'motorcycle.jpg') - -if os.path.exists('models') == False: - os.mkdir('models') -for repo in urls: - files = urls[repo] - for file in files: - url = hf_hub_url(repo, file) - name_ckp = url.split('/')[-1] - save_path = os.path.join('models', name_ckp) - if os.path.exists(save_path) == False: - subprocess.run(shlex.split(f'wget {url} -O {save_path}')) - -# config -parser = argparse.ArgumentParser() -parser.add_argument( - '--sd_ckpt', - type=str, - default='models/v1-5-pruned-emaonly.ckpt', - help='path to checkpoint of stable diffusion model, both .ckpt and .safetensor are supported', -) -parser.add_argument( - '--vae_ckpt', - type=str, - default=None, - help='vae checkpoint, anime SD models usually have seperate vae ckpt that need to be loaded', -) -global_opt = parser.parse_args() -global_opt.config = 'configs/stable-diffusion/sd-v1-inference.yaml' -for cond_name in supported_cond: - if cond_name in ['sketch', 'depth', 'canny']: - setattr(global_opt, f'{cond_name}_adapter_ckpt', f'models/t2iadapter_{cond_name}_sd15v2.pth') - else: - setattr(global_opt, f'{cond_name}_adapter_ckpt', f'models/t2iadapter_{cond_name}_sd14v1.pth') -global_opt.device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") -global_opt.max_resolution = 512 * 512 -global_opt.sampler = 'ddim' -global_opt.cond_weight = 1.0 -global_opt.C = 4 -global_opt.f = 8 -# adapters and models to processing condition inputs -adapters = {} -cond_models = {} -torch.cuda.empty_cache() - - -def draw_transfer(im1): - c = im1[:, :, 0:3].astype(np.float32) - a = im1[:, :, 3:4].astype(np.float32) / 255.0 - im1 = c * a + 255.0 * (1.0 - a) - im1 = (im1.clip(0, 255)).astype(np.uint8) - - return im1 - -class process: - def __init__(self): - self.base_model = 'v1-5-pruned-emaonly.ckpt' - # stable-diffusion model - self.sd_model, self.sampler = get_sd_models(global_opt) - - def run(self, *args): - opt = copy.deepcopy(global_opt) - opt.prompt, opt.neg_prompt, opt.scale, opt.n_samples, opt.seed, opt.steps, opt.resize_short_edge, opt.cond_tau, opt.base_model \ - = args[-9:] - # check base model - if opt.base_model!=self.base_model: - ckpt = os.path.join("models", opt.base_model) - pl_sd = read_state_dict(ckpt) - if "state_dict" in pl_sd: - pl_sd = pl_sd["state_dict"] - else: - pl_sd = pl_sd - self.sd_model.load_state_dict(pl_sd, strict=False) - del pl_sd - self.base_model = opt.base_model - if self.base_model!='v1-5-pruned-emaonly.ckpt' and self.base_model!='sd-v1-4.ckpt': - vae_sd = torch.load(os.path.join('models', 'anything-v4.0.vae.pt'), map_location="cuda") - st = vae_sd["state_dict"] - self.sd_model.first_stage_model.load_state_dict(st, strict=False) - del st - - with torch.inference_mode(), \ - self.sd_model.ema_scope(), \ - autocast('cuda'): - - inps = [] - for i in range(0, len(args) - 9, len(supported_cond)): - inps.append(args[i:i + len(supported_cond)]) - - conds = [] - activated_conds = [] - - ims1 = [] - ims2 = [] - for idx, (b, im1, im2, cond_weight) in enumerate(zip(*inps)): - if b != 'Nothing' and (im1 is not None or im2 is not None): - if im1 is not None and isinstance(im1,dict): - im1 = im1['mask'] - im1 = draw_transfer(im1) - - if im1 is not None: - h, w, _ = im1.shape - else: - h, w, _ = im2.shape - - # resize all the images to the same size - for idx, (b, im1, im2, cond_weight) in enumerate(zip(*inps)): - if idx == 0: - ims1.append(im1) - ims2.append(im2) - continue - if b != 'Nothing': - if im1 is not None and isinstance(im1,dict): - im1 = im1['mask'] - im1 = draw_transfer(im1) - im2 = im1 - cv2.imwrite('sketch.png', im1) - if im1 is not None: - im1 = cv2.resize(im1, (w, h), interpolation=cv2.INTER_CUBIC) - if im2 is not None: - im2 = cv2.resize(im2, (w, h), interpolation=cv2.INTER_CUBIC) - ims1.append(im1) - ims2.append(im2) - - for idx, (b, _, _, cond_weight) in enumerate(zip(*inps)): - cond_name = supported_cond[idx] - if b == 'Nothing': - if cond_name in adapters: - adapters[cond_name]['model'] = adapters[cond_name]['model'].to(opt.device)#.cpu() - else: - # print(idx,b) - activated_conds.append(cond_name) - if cond_name in adapters: - adapters[cond_name]['model'] = adapters[cond_name]['model'].to(opt.device) - else: - adapters[cond_name] = get_adapters(opt, getattr(ExtraCondition, cond_name)) - adapters[cond_name]['cond_weight'] = cond_weight - - process_cond_module = getattr(api, f'get_cond_{cond_name}') - - if b == 'Image': - if cond_name not in cond_models: - cond_models[cond_name] = get_cond_model(opt, getattr(ExtraCondition, cond_name)) - conds.append(process_cond_module(opt, ims1[idx], 'image', cond_models[cond_name])) - else: - if idx == 2: # draw - conds.append(process_cond_module(opt, (255.-ims2[idx]).astype(np.uint8), cond_name, None)) - else: - conds.append(process_cond_module(opt, ims2[idx], cond_name, None)) - - adapter_features, append_to_context = get_adapter_feature( - conds, [adapters[cond_name] for cond_name in activated_conds]) - - output_conds = [] - for cond in conds: - output_conds.append(tensor2img(cond, rgb2bgr=False)) - - ims = [] - seed_everything(opt.seed) - for _ in range(opt.n_samples): - result = diffusion_inference(opt, self.sd_model, self.sampler, adapter_features, append_to_context) - ims.append(tensor2img(result, rgb2bgr=False)) - - # Clear GPU memory cache so less likely to OOM - torch.cuda.empty_cache() - return ims, output_conds - - -def change_visible(im1, im2, val): - outputs = {} - if val == "Image": - outputs[im1] = gr.update(visible=True) - outputs[im2] = gr.update(visible=False) - elif val == "Nothing": - outputs[im1] = gr.update(visible=False) - outputs[im2] = gr.update(visible=False) - else: - outputs[im1] = gr.update(visible=False) - outputs[im2] = gr.update(visible=True) - return outputs - -DESCRIPTION = '# [T2I-Adapter](https://github.com/TencentARC/T2I-Adapter)' - -DESCRIPTION += f'

Gradio demo for **T2I-Adapter**: [[GitHub]](https://github.com/TencentARC/T2I-Adapter), [[Paper]](https://arxiv.org/abs/2302.08453). If T2I-Adapter is helpful, please help to ⭐ the [Github Repo](https://github.com/TencentARC/T2I-Adapter) and recommend it to your friends 😊

' - -DESCRIPTION += f'

For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space

' - -processer = process() - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - - btns = [] - ims1 = [] - ims2 = [] - cond_weights = [] - - with gr.Row(): - with gr.Column(scale=1.9): - with gr.Box(): - gr.Markdown("
Style & Color
") - with gr.Row(): - for cond_name in supported_cond_map[:2]: - with gr.Box(): - with gr.Column(): - if cond_name == 'style': - btn1 = gr.Radio( - choices=["Image", "Nothing"], - label=f"Input type for {cond_name}", - interactive=True, - value="Nothing", - ) - else: - btn1 = gr.Radio( - choices=["Image", cond_name, "Nothing"], - label=f"Input type for {cond_name}", - interactive=True, - value="Nothing", - ) - - im1 = gr.Image( - source='upload', label="Image", interactive=True, visible=False, type="numpy") - im2 = gr.Image( - source='upload', label=cond_name, interactive=True, visible=False, type="numpy") - cond_weight = gr.Slider( - label="Condition weight", - minimum=0, - maximum=5, - step=0.05, - value=1, - interactive=True) - - fn = partial(change_visible, im1, im2) - btn1.change(fn=fn, inputs=[btn1], outputs=[im1, im2], queue=False) - - btns.append(btn1) - ims1.append(im1) - ims2.append(im2) - cond_weights.append(cond_weight) - - with gr.Box(): - gr.Markdown("
Drawing
") - with gr.Column(): - btn1 = gr.Radio( - choices=["Sketch", "Nothing"], - label=f"Input type for drawing", - interactive=True, - value="Nothing") - im1 = gr.Image(source='canvas', tool='color-sketch', label='Pay attention to adjusting stylus thickness!', visible=False) - im2 = im1 - cond_weight = gr.Slider( - label="Condition weight", - minimum=0, - maximum=5, - step=0.05, - value=1, - interactive=True) - - fn = partial(change_visible, im1, im2) - btn1.change(fn=fn, inputs=[btn1], outputs=[im1, im2], queue=False) - - btns.append(btn1) - ims1.append(im1) - ims2.append(im2) - cond_weights.append(cond_weight) - - with gr.Column(scale=4): - with gr.Box(): - gr.Markdown("
Structure
") - with gr.Row(): - for cond_name in supported_cond_map[2:6]: - with gr.Box(): - with gr.Column(): - if cond_name == 'openpose': - btn1 = gr.Radio( - choices=["Image", 'pose', "Nothing"], - label=f"Input type for {cond_name}", - interactive=True, - value="Nothing", - ) - else: - btn1 = gr.Radio( - choices=["Image", cond_name, "Nothing"], - label=f"Input type for {cond_name}", - interactive=True, - value="Nothing", - ) - - im1 = gr.Image( - source='upload', label="Image", interactive=True, visible=False, type="numpy") - im2 = gr.Image( - source='upload', label=cond_name, interactive=True, visible=False, type="numpy") - cond_weight = gr.Slider( - label="Condition weight", - minimum=0, - maximum=5, - step=0.05, - value=1, - interactive=True) - - fn = partial(change_visible, im1, im2) - btn1.change(fn=fn, inputs=[btn1], outputs=[im1, im2], queue=False) - btns.append(btn1) - ims1.append(im1) - ims2.append(im2) - cond_weights.append(cond_weight) - - with gr.Column(): - base_model = gr.inputs.Radio(['v1-5-pruned-emaonly.ckpt', 'sd-v1-4.ckpt', 'anything-v4.0-pruned.ckpt'], type="value", default='v1-5-pruned-emaonly.ckpt', label='The base model you want to use. You can try more base models on https://civitai.com/.') - prompt = gr.Textbox(label="Prompt") - with gr.Accordion('Advanced options', open=False): - neg_prompt = gr.Textbox(label="Negative Prompt", value=DEFAULT_NEGATIVE_PROMPT) - scale = gr.Slider( - label="Guidance Scale (Classifier free guidance)", value=7.5, minimum=1, maximum=20, step=0.1) - n_samples = gr.Slider(label="Num samples", value=1, minimum=1, maximum=1, step=1) - seed = gr.Slider(label="Seed", value=42, minimum=0, maximum=10000, step=1, randomize=True) - steps = gr.Slider(label="Steps", value=50, minimum=10, maximum=100, step=1) - resize_short_edge = gr.Slider(label="Image resolution", value=512, minimum=320, maximum=1024, step=1) - cond_tau = gr.Slider( - label="timestamp parameter that determines until which step the adapter is applied", - value=1.0, - minimum=0.1, - maximum=1.0, - step=0.05) - submit = gr.Button("Generate") - - with gr.Box(): - gr.Markdown("
Results
") - with gr.Column(): - output = gr.Gallery().style(grid=2, height='auto') - cond = gr.Gallery().style(grid=2, height='auto') - - inps = list(chain(btns, ims1, ims2, cond_weights)) - - inps.extend([prompt, neg_prompt, scale, n_samples, seed, steps, resize_short_edge, cond_tau, base_model]) - submit.click(fn=processer.run, inputs=inps, outputs=[output, cond]) - - ex = gr.Examples([ - [ - "Image", - "Nothing", - "Nothing", - "Image", - "Nothing", - "Nothing", - "Nothing", - "cyber_style.jpg", - "white.png", - "white.png", - "sword.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - 1, - 1, - 1, - 1, - 1, - 1, - 1, - "master sword", - "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - 7.5, - 1, - 2500, - 50, - 512, - 1, - "v1-5-pruned-emaonly.ckpt", - ], - [ - "Image", - "Nothing", - "Nothing", - "Image", - "Nothing", - "Nothing", - "Nothing", - "scream_style.jpeg", - "white.png", - "white.png", - "motorcycle.jpg", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - "white.png", - 1, - 1, - 1, - 1, - 1, - 1, - 1, - "motorcycle", - "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - 7.5, - 1, - 2500, - 50, - 512, - 1, - "v1-5-pruned-emaonly.ckpt", - ], - ], - fn=processer.run, - inputs=inps, - outputs=[output, cond], - cache_examples=True) - -demo.queue().launch(debug=True, server_name='0.0.0.0') diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Shake.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Shake.js deleted file mode 100644 index ed0e51eb660dd0102ab44c903a414a25df755768..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/shake/Shake.js +++ /dev/null @@ -1,2 +0,0 @@ -import Shake from '../../../plugins/shakeposition.js'; -export default Shake; \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py deleted file mode 100644 index 1abe50a9b6b67485f5b29109dec02b9af0937846..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/vq_diffusion/pipeline_vq_diffusion.py +++ /dev/null @@ -1,325 +0,0 @@ -# Copyright 2023 Microsoft and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, List, Optional, Tuple, Union - -import torch -from transformers import CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import ConfigMixin, register_to_config -from ...models import ModelMixin, Transformer2DModel, VQModel -from ...schedulers import VQDiffusionScheduler -from ...utils import logging -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class LearnedClassifierFreeSamplingEmbeddings(ModelMixin, ConfigMixin): - """ - Utility class for storing learned text embeddings for classifier free sampling - """ - - @register_to_config - def __init__(self, learnable: bool, hidden_size: Optional[int] = None, length: Optional[int] = None): - super().__init__() - - self.learnable = learnable - - if self.learnable: - assert hidden_size is not None, "learnable=True requires `hidden_size` to be set" - assert length is not None, "learnable=True requires `length` to be set" - - embeddings = torch.zeros(length, hidden_size) - else: - embeddings = None - - self.embeddings = torch.nn.Parameter(embeddings) - - -class VQDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using VQ Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Args: - vqvae ([`VQModel`]): - Vector Quantized Variational Auto-Encoder (VAE) model to encode and decode images to and from latent - representations. - text_encoder ([`~transformers.CLIPTextModel`]): - Frozen text-encoder ([clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)). - tokenizer ([`~transformers.CLIPTokenizer`]): - A `CLIPTokenizer` to tokenize text. - transformer ([`Transformer2DModel`]): - A conditional `Transformer2DModel` to denoise the encoded image latents. - scheduler ([`VQDiffusionScheduler`]): - A scheduler to be used in combination with `transformer` to denoise the encoded image latents. - """ - - vqvae: VQModel - text_encoder: CLIPTextModel - tokenizer: CLIPTokenizer - transformer: Transformer2DModel - learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings - scheduler: VQDiffusionScheduler - - def __init__( - self, - vqvae: VQModel, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - transformer: Transformer2DModel, - scheduler: VQDiffusionScheduler, - learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings, - ): - super().__init__() - - self.register_modules( - vqvae=vqvae, - transformer=transformer, - text_encoder=text_encoder, - tokenizer=tokenizer, - scheduler=scheduler, - learned_classifier_free_sampling_embeddings=learned_classifier_free_sampling_embeddings, - ) - - def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance): - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - - if text_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - prompt_embeds = self.text_encoder(text_input_ids.to(self.device))[0] - - # NOTE: This additional step of normalizing the text embeddings is from VQ-Diffusion. - # While CLIP does normalize the pooled output of the text transformer when combining - # the image and text embeddings, CLIP does not directly normalize the last hidden state. - # - # CLIP normalizing the pooled output. - # https://github.com/huggingface/transformers/blob/d92e22d1f28324f513f3080e5c47c071a3916721/src/transformers/models/clip/modeling_clip.py#L1052-L1053 - prompt_embeds = prompt_embeds / prompt_embeds.norm(dim=-1, keepdim=True) - - # duplicate text embeddings for each generation per prompt - prompt_embeds = prompt_embeds.repeat_interleave(num_images_per_prompt, dim=0) - - if do_classifier_free_guidance: - if self.learned_classifier_free_sampling_embeddings.learnable: - negative_prompt_embeds = self.learned_classifier_free_sampling_embeddings.embeddings - negative_prompt_embeds = negative_prompt_embeds.unsqueeze(0).repeat(batch_size, 1, 1) - else: - uncond_tokens = [""] * batch_size - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - negative_prompt_embeds = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - # See comment for normalizing text embeddings - negative_prompt_embeds = negative_prompt_embeds / negative_prompt_embeds.norm(dim=-1, keepdim=True) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - num_inference_steps: int = 100, - guidance_scale: float = 5.0, - truncation_rate: float = 1.0, - num_images_per_prompt: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - ) -> Union[ImagePipelineOutput, Tuple]: - """ - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide image generation. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - truncation_rate (`float`, *optional*, defaults to 1.0 (equivalent to no truncation)): - Used to "truncate" the predicted classes for x_0 such that the cumulative probability for a pixel is at - most `truncation_rate`. The lowest probabilities that would increase the cumulative probability above - `truncation_rate` are set to zero. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`torch.Generator`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor` of shape (batch), *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Must be valid embedding indices.If not provided, a latents tensor will be generated of - completely masked latent pixels. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is - returned where the first element is a list with the generated images. - """ - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - batch_size = batch_size * num_images_per_prompt - - do_classifier_free_guidance = guidance_scale > 1.0 - - prompt_embeds = self._encode_prompt(prompt, num_images_per_prompt, do_classifier_free_guidance) - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # get the initial completely masked latents unless the user supplied it - - latents_shape = (batch_size, self.transformer.num_latent_pixels) - if latents is None: - mask_class = self.transformer.num_vector_embeds - 1 - latents = torch.full(latents_shape, mask_class).to(self.device) - else: - if latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - if (latents < 0).any() or (latents >= self.transformer.num_vector_embeds).any(): - raise ValueError( - "Unexpected latents value(s). All latents be valid embedding indices i.e. in the range 0," - f" {self.transformer.num_vector_embeds - 1} (inclusive)." - ) - latents = latents.to(self.device) - - # set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=self.device) - - timesteps_tensor = self.scheduler.timesteps.to(self.device) - - sample = latents - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the sample if we are doing classifier free guidance - latent_model_input = torch.cat([sample] * 2) if do_classifier_free_guidance else sample - - # predict the un-noised image - # model_output == `log_p_x_0` - model_output = self.transformer(latent_model_input, encoder_hidden_states=prompt_embeds, timestep=t).sample - - if do_classifier_free_guidance: - model_output_uncond, model_output_text = model_output.chunk(2) - model_output = model_output_uncond + guidance_scale * (model_output_text - model_output_uncond) - model_output -= torch.logsumexp(model_output, dim=1, keepdim=True) - - model_output = self.truncate(model_output, truncation_rate) - - # remove `log(0)`'s (`-inf`s) - model_output = model_output.clamp(-70) - - # compute the previous noisy sample x_t -> x_t-1 - sample = self.scheduler.step(model_output, timestep=t, sample=sample, generator=generator).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, sample) - - embedding_channels = self.vqvae.config.vq_embed_dim - embeddings_shape = (batch_size, self.transformer.height, self.transformer.width, embedding_channels) - embeddings = self.vqvae.quantize.get_codebook_entry(sample, shape=embeddings_shape) - image = self.vqvae.decode(embeddings, force_not_quantize=True).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) - - def truncate(self, log_p_x_0: torch.FloatTensor, truncation_rate: float) -> torch.FloatTensor: - """ - Truncates `log_p_x_0` such that for each column vector, the total cumulative probability is `truncation_rate` - The lowest probabilities that would increase the cumulative probability above `truncation_rate` are set to - zero. - """ - sorted_log_p_x_0, indices = torch.sort(log_p_x_0, 1, descending=True) - sorted_p_x_0 = torch.exp(sorted_log_p_x_0) - keep_mask = sorted_p_x_0.cumsum(dim=1) < truncation_rate - - # Ensure that at least the largest probability is not zeroed out - all_true = torch.full_like(keep_mask[:, 0:1, :], True) - keep_mask = torch.cat((all_true, keep_mask), dim=1) - keep_mask = keep_mask[:, :-1, :] - - keep_mask = keep_mask.gather(1, indices.argsort(1)) - - rv = log_p_x_0.clone() - - rv[~keep_mask] = -torch.inf # -inf = log(0) - - return rv diff --git a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rpn/crpn_fast_rcnn_r50_caffe_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/cascade_rpn/crpn_fast_rcnn_r50_caffe_fpn_1x_coco.py deleted file mode 100644 index 68c57dfb242c6681cda6ead27929d6737c74fc45..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/cascade_rpn/crpn_fast_rcnn_r50_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,75 +0,0 @@ -_base_ = '../fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - roi_head=dict( - bbox_head=dict( - bbox_coder=dict(target_stds=[0.04, 0.04, 0.08, 0.08]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.5), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rcnn=dict( - assigner=dict( - pos_iou_thr=0.65, neg_iou_thr=0.65, min_pos_iou=0.65), - sampler=dict(num=256))), - test_cfg=dict(rcnn=dict(score_thr=1e-3))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadProposals', num_max_proposals=300), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'proposals', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadProposals', num_max_proposals=300), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='ToTensor', keys=['proposals']), - dict( - type='ToDataContainer', - fields=[dict(key='proposals', stack=False)]), - dict(type='Collect', keys=['img', 'proposals']), - ]) -] -data = dict( - train=dict( - proposal_file=data_root + - 'proposals/crpn_r50_caffe_fpn_1x_train2017.pkl', - pipeline=train_pipeline), - val=dict( - proposal_file=data_root + - 'proposals/crpn_r50_caffe_fpn_1x_val2017.pkl', - pipeline=test_pipeline), - test=dict( - proposal_file=data_root + - 'proposals/crpn_r50_caffe_fpn_1x_val2017.pkl', - pipeline=test_pipeline)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/ExLlama.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/ExLlama.md deleted file mode 100644 index db0ebe63c90cf155e8b550e73a542d560ccb0b54..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/ExLlama.md +++ /dev/null @@ -1,22 +0,0 @@ -# ExLlama - -### About - -ExLlama is an extremely optimized GPTQ backend for LLaMA models. It features much lower VRAM usage and much higher speeds due to not relying on unoptimized transformers code. - -### Usage - -Configure text-generation-webui to use exllama via the UI or command line: - - In the "Model" tab, set "Loader" to "exllama" - - Specify `--loader exllama` on the command line - -### Manual setup - -No additional installation steps are necessary since an exllama package is already included in the requirements.txt. If this package fails to install for some reason, you can install it manually by cloning the original repository into your `repositories/` folder: - -``` -mkdir repositories -cd repositories -git clone https://github.com/turboderp/exllama -``` - diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/switch_tabs.js b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/switch_tabs.js deleted file mode 100644 index 75d563670dbd7a6d5e1b81eb5d38b025a868c01b..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/js/switch_tabs.js +++ /dev/null @@ -1,59 +0,0 @@ -let chat_tab = document.getElementById("chat-tab"); -let main_parent = chat_tab.parentNode; - -function scrollToTop() { - window.scrollTo({ - top: 0, - // behavior: 'smooth' - }); -} - -function findButtonsByText(buttonText) { - const buttons = document.getElementsByTagName("button"); - const matchingButtons = []; - buttonText = buttonText.trim(); - - for (let i = 0; i < buttons.length; i++) { - const button = buttons[i]; - const buttonInnerText = button.textContent.trim(); - - if (buttonInnerText === buttonText) { - matchingButtons.push(button); - } - } - - return matchingButtons; -} - -function switch_to_chat() { - let chat_tab_button = main_parent.childNodes[0].childNodes[1]; - chat_tab_button.click(); - scrollToTop(); -} - -function switch_to_default() { - let default_tab_button = main_parent.childNodes[0].childNodes[4]; - default_tab_button.click(); - scrollToTop(); -} - -function switch_to_notebook() { - let notebook_tab_button = main_parent.childNodes[0].childNodes[7]; - notebook_tab_button.click(); - findButtonsByText("Raw")[1].click(); - scrollToTop(); -} - -function switch_to_generation_parameters() { - let parameters_tab_button = main_parent.childNodes[0].childNodes[10]; - parameters_tab_button.click(); - findButtonsByText("Generation")[0].click(); - scrollToTop(); -} - -function switch_to_character() { - let parameters_tab_button = main_parent.childNodes[0].childNodes[10]; - parameters_tab_button.click(); - findButtonsByText("Character")[0].click(); - scrollToTop(); -} diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/blocks.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/Arikkod/FoodVisionMini/app.py b/spaces/Arikkod/FoodVisionMini/app.py deleted file mode 100644 index eae29e16c7c067911a1b2906a99ab36cc62d05b2..0000000000000000000000000000000000000000 --- a/spaces/Arikkod/FoodVisionMini/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import os -import torch -from model import create_effnetb2_model -from timeit import default_timer as timer -from typing import Tuple, Dict - -class_names = ['pizza', 'steak', 'sushi'] -effnetb2, effnetb2_transforms = create_effnetb2_model(3, 42) -# Load save weights: -effnetb2.load_state_dict( - torch.load(f='09_pretrained_effnetb2_feature_extractor_pizza_steak_sushi_20_precent.pth', - map_location=torch.device('cpu') - ) -) - -def predict(img): - # Start a timer - start_time = timer() - # Transform the input image for use wit EffNetB2 - img = effnetb2_transforms(img).unsqueeze(0) - # Put model into eval mode, make prediction - effnetb2.eval() - with torch.inference_mode(): - pred_probs = torch.softmax(effnetb2(img), dim=1) - # Create a prediction labal and prediction probability dictionary - pred_labels_and_probs = {class_names[i]:float(pred_probs[0][i]) for i in range(len(class_names))} - # Calculated pred time - end_time = timer() - pred_time = round(end_time - start_time, 4) - # Return pred dict and pred time - return pred_labels_and_probs, pred_time - - -title = 'FoodVision Mini 🍕🥩🍣' -description = 'An [EfficientNetB2 feature extractor](https://pytorch.org/vision/main/models/generated/torchvision.models.efficientnet_b2.html)' -article = 'Created with Pytorch model deployment' -example_list = [["./examples/" + file] for file in os.listdir("./examples")] - -demo = gr.Interface(fn=predict, - inputs=gr.Image(type='pil'), - outputs=[gr.Label(num_top_classes=3, label='Predictions'), - gr.Number(label='Prediction time (s)')], - examples=example_list, - title=title, - description=description, - article=article - ) - -demo.launch(debug=False, - share=False) diff --git a/spaces/Arthur678/vits-uma-genshin-honkai/modules.py b/spaces/Arthur678/vits-uma-genshin-honkai/modules.py deleted file mode 100644 index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000 --- a/spaces/Arthur678/vits-uma-genshin-honkai/modules.py +++ /dev/null @@ -1,388 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp deleted file mode 100644 index 551243fdadfd1682b5dc6628623b67a79b3f6c74..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,43 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include - -#include -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -} // namespace groundingdino diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/README.md b/spaces/Ataturk-Chatbot/HuggingFaceChat/README.md deleted file mode 100644 index 8013b939f020321c3ade76f59daacead5fcd69e7..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HuggingFaceChat -emoji: 🚀 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/configuration.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/configuration.py deleted file mode 100644 index 84b134e490b081d661daf69f98e0b9b1fdddd36f..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/commands/configuration.py +++ /dev/null @@ -1,282 +0,0 @@ -import logging -import os -import subprocess -from optparse import Values -from typing import Any, List, Optional - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.configuration import ( - Configuration, - Kind, - get_configuration_files, - kinds, -) -from pip._internal.exceptions import PipError -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import get_prog, write_output - -logger = logging.getLogger(__name__) - - -class ConfigurationCommand(Command): - """ - Manage local and global configuration. - - Subcommands: - - - list: List the active configuration (or from the file specified) - - edit: Edit the configuration file in an editor - - get: Get the value associated with command.option - - set: Set the command.option=value - - unset: Unset the value associated with command.option - - debug: List the configuration files and values defined under them - - Configuration keys should be dot separated command and option name, - with the special prefix "global" affecting any command. For example, - "pip config set global.index-url https://example.org/" would configure - the index url for all commands, but "pip config set download.timeout 10" - would configure a 10 second timeout only for "pip download" commands. - - If none of --user, --global and --site are passed, a virtual - environment configuration file is used if one is active and the file - exists. Otherwise, all modifications happen to the user file by - default. - """ - - ignore_require_venv = True - usage = """ - %prog [] list - %prog [] [--editor ] edit - - %prog [] get command.option - %prog [] set command.option value - %prog [] unset command.option - %prog [] debug - """ - - def add_options(self) -> None: - self.cmd_opts.add_option( - "--editor", - dest="editor", - action="store", - default=None, - help=( - "Editor to use to edit the file. Uses VISUAL or EDITOR " - "environment variables if not provided." - ), - ) - - self.cmd_opts.add_option( - "--global", - dest="global_file", - action="store_true", - default=False, - help="Use the system-wide configuration file only", - ) - - self.cmd_opts.add_option( - "--user", - dest="user_file", - action="store_true", - default=False, - help="Use the user configuration file only", - ) - - self.cmd_opts.add_option( - "--site", - dest="site_file", - action="store_true", - default=False, - help="Use the current environment configuration file only", - ) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - handlers = { - "list": self.list_values, - "edit": self.open_in_editor, - "get": self.get_name, - "set": self.set_name_value, - "unset": self.unset_name, - "debug": self.list_config_values, - } - - # Determine action - if not args or args[0] not in handlers: - logger.error( - "Need an action (%s) to perform.", - ", ".join(sorted(handlers)), - ) - return ERROR - - action = args[0] - - # Determine which configuration files are to be loaded - # Depends on whether the command is modifying. - try: - load_only = self._determine_file( - options, need_value=(action in ["get", "set", "unset", "edit"]) - ) - except PipError as e: - logger.error(e.args[0]) - return ERROR - - # Load a new configuration - self.configuration = Configuration( - isolated=options.isolated_mode, load_only=load_only - ) - self.configuration.load() - - # Error handling happens here, not in the action-handlers. - try: - handlers[action](options, args[1:]) - except PipError as e: - logger.error(e.args[0]) - return ERROR - - return SUCCESS - - def _determine_file(self, options: Values, need_value: bool) -> Optional[Kind]: - file_options = [ - key - for key, value in ( - (kinds.USER, options.user_file), - (kinds.GLOBAL, options.global_file), - (kinds.SITE, options.site_file), - ) - if value - ] - - if not file_options: - if not need_value: - return None - # Default to user, unless there's a site file. - elif any( - os.path.exists(site_config_file) - for site_config_file in get_configuration_files()[kinds.SITE] - ): - return kinds.SITE - else: - return kinds.USER - elif len(file_options) == 1: - return file_options[0] - - raise PipError( - "Need exactly one file to operate upon " - "(--user, --site, --global) to perform." - ) - - def list_values(self, options: Values, args: List[str]) -> None: - self._get_n_args(args, "list", n=0) - - for key, value in sorted(self.configuration.items()): - write_output("%s=%r", key, value) - - def get_name(self, options: Values, args: List[str]) -> None: - key = self._get_n_args(args, "get [name]", n=1) - value = self.configuration.get_value(key) - - write_output("%s", value) - - def set_name_value(self, options: Values, args: List[str]) -> None: - key, value = self._get_n_args(args, "set [name] [value]", n=2) - self.configuration.set_value(key, value) - - self._save_configuration() - - def unset_name(self, options: Values, args: List[str]) -> None: - key = self._get_n_args(args, "unset [name]", n=1) - self.configuration.unset_value(key) - - self._save_configuration() - - def list_config_values(self, options: Values, args: List[str]) -> None: - """List config key-value pairs across different config files""" - self._get_n_args(args, "debug", n=0) - - self.print_env_var_values() - # Iterate over config files and print if they exist, and the - # key-value pairs present in them if they do - for variant, files in sorted(self.configuration.iter_config_files()): - write_output("%s:", variant) - for fname in files: - with indent_log(): - file_exists = os.path.exists(fname) - write_output("%s, exists: %r", fname, file_exists) - if file_exists: - self.print_config_file_values(variant) - - def print_config_file_values(self, variant: Kind) -> None: - """Get key-value pairs from the file of a variant""" - for name, value in self.configuration.get_values_in_config(variant).items(): - with indent_log(): - write_output("%s: %s", name, value) - - def print_env_var_values(self) -> None: - """Get key-values pairs present as environment variables""" - write_output("%s:", "env_var") - with indent_log(): - for key, value in sorted(self.configuration.get_environ_vars()): - env_var = f"PIP_{key.upper()}" - write_output("%s=%r", env_var, value) - - def open_in_editor(self, options: Values, args: List[str]) -> None: - editor = self._determine_editor(options) - - fname = self.configuration.get_file_to_edit() - if fname is None: - raise PipError("Could not determine appropriate file.") - elif '"' in fname: - # This shouldn't happen, unless we see a username like that. - # If that happens, we'd appreciate a pull request fixing this. - raise PipError( - f'Can not open an editor for a file name containing "\n{fname}' - ) - - try: - subprocess.check_call(f'{editor} "{fname}"', shell=True) - except FileNotFoundError as e: - if not e.filename: - e.filename = editor - raise - except subprocess.CalledProcessError as e: - raise PipError( - "Editor Subprocess exited with exit code {}".format(e.returncode) - ) - - def _get_n_args(self, args: List[str], example: str, n: int) -> Any: - """Helper to make sure the command got the right number of arguments""" - if len(args) != n: - msg = ( - "Got unexpected number of arguments, expected {}. " - '(example: "{} config {}")' - ).format(n, get_prog(), example) - raise PipError(msg) - - if n == 1: - return args[0] - else: - return args - - def _save_configuration(self) -> None: - # We successfully ran a modifying command. Need to save the - # configuration. - try: - self.configuration.save() - except Exception: - logger.exception( - "Unable to save configuration. Please report this as a bug." - ) - raise PipError("Internal Error.") - - def _determine_editor(self, options: Values) -> str: - if options.editor is not None: - return options.editor - elif "VISUAL" in os.environ: - return os.environ["VISUAL"] - elif "EDITOR" in os.environ: - return os.environ["EDITOR"] - else: - raise PipError("Could not determine editor to use.") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/certs.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/certs.py deleted file mode 100644 index 38696a1fb3419dd810004d5aec9654e5224042ed..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/requests/certs.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python - -""" -requests.certs -~~~~~~~~~~~~~~ - -This module returns the preferred default CA certificate bundle. There is -only one — the one from the certifi package. - -If you are packaging Requests, e.g., for a Linux distribution or a managed -environment, you can change the definition of where() to return a separately -packaged CA bundle. -""" - -import os - -if "_PIP_STANDALONE_CERT" not in os.environ: - from pip._vendor.certifi import where -else: - def where(): - return os.environ["_PIP_STANDALONE_CERT"] - -if __name__ == "__main__": - print(where()) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/context.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/context.py deleted file mode 100644 index 87a4e3dca299c4201ac50f6ef589dc73f1c45576..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/jaraco/context.py +++ /dev/null @@ -1,213 +0,0 @@ -import os -import subprocess -import contextlib -import functools -import tempfile -import shutil -import operator - - -@contextlib.contextmanager -def pushd(dir): - orig = os.getcwd() - os.chdir(dir) - try: - yield dir - finally: - os.chdir(orig) - - -@contextlib.contextmanager -def tarball_context(url, target_dir=None, runner=None, pushd=pushd): - """ - Get a tarball, extract it, change to that directory, yield, then - clean up. - `runner` is the function to invoke commands. - `pushd` is a context manager for changing the directory. - """ - if target_dir is None: - target_dir = os.path.basename(url).replace('.tar.gz', '').replace('.tgz', '') - if runner is None: - runner = functools.partial(subprocess.check_call, shell=True) - # In the tar command, use --strip-components=1 to strip the first path and - # then - # use -C to cause the files to be extracted to {target_dir}. This ensures - # that we always know where the files were extracted. - runner('mkdir {target_dir}'.format(**vars())) - try: - getter = 'wget {url} -O -' - extract = 'tar x{compression} --strip-components=1 -C {target_dir}' - cmd = ' | '.join((getter, extract)) - runner(cmd.format(compression=infer_compression(url), **vars())) - with pushd(target_dir): - yield target_dir - finally: - runner('rm -Rf {target_dir}'.format(**vars())) - - -def infer_compression(url): - """ - Given a URL or filename, infer the compression code for tar. - """ - # cheat and just assume it's the last two characters - compression_indicator = url[-2:] - mapping = dict(gz='z', bz='j', xz='J') - # Assume 'z' (gzip) if no match - return mapping.get(compression_indicator, 'z') - - -@contextlib.contextmanager -def temp_dir(remover=shutil.rmtree): - """ - Create a temporary directory context. Pass a custom remover - to override the removal behavior. - """ - temp_dir = tempfile.mkdtemp() - try: - yield temp_dir - finally: - remover(temp_dir) - - -@contextlib.contextmanager -def repo_context(url, branch=None, quiet=True, dest_ctx=temp_dir): - """ - Check out the repo indicated by url. - - If dest_ctx is supplied, it should be a context manager - to yield the target directory for the check out. - """ - exe = 'git' if 'git' in url else 'hg' - with dest_ctx() as repo_dir: - cmd = [exe, 'clone', url, repo_dir] - if branch: - cmd.extend(['--branch', branch]) - devnull = open(os.path.devnull, 'w') - stdout = devnull if quiet else None - subprocess.check_call(cmd, stdout=stdout) - yield repo_dir - - -@contextlib.contextmanager -def null(): - yield - - -class ExceptionTrap: - """ - A context manager that will catch certain exceptions and provide an - indication they occurred. - - >>> with ExceptionTrap() as trap: - ... raise Exception() - >>> bool(trap) - True - - >>> with ExceptionTrap() as trap: - ... pass - >>> bool(trap) - False - - >>> with ExceptionTrap(ValueError) as trap: - ... raise ValueError("1 + 1 is not 3") - >>> bool(trap) - True - - >>> with ExceptionTrap(ValueError) as trap: - ... raise Exception() - Traceback (most recent call last): - ... - Exception - - >>> bool(trap) - False - """ - - exc_info = None, None, None - - def __init__(self, exceptions=(Exception,)): - self.exceptions = exceptions - - def __enter__(self): - return self - - @property - def type(self): - return self.exc_info[0] - - @property - def value(self): - return self.exc_info[1] - - @property - def tb(self): - return self.exc_info[2] - - def __exit__(self, *exc_info): - type = exc_info[0] - matches = type and issubclass(type, self.exceptions) - if matches: - self.exc_info = exc_info - return matches - - def __bool__(self): - return bool(self.type) - - def raises(self, func, *, _test=bool): - """ - Wrap func and replace the result with the truth - value of the trap (True if an exception occurred). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> raises = ExceptionTrap(ValueError).raises - - Now decorate a function that always fails. - - >>> @raises - ... def fail(): - ... raise ValueError('failed') - >>> fail() - True - """ - - @functools.wraps(func) - def wrapper(*args, **kwargs): - with ExceptionTrap(self.exceptions) as trap: - func(*args, **kwargs) - return _test(trap) - - return wrapper - - def passes(self, func): - """ - Wrap func and replace the result with the truth - value of the trap (True if no exception). - - First, give the decorator an alias to support Python 3.8 - Syntax. - - >>> passes = ExceptionTrap(ValueError).passes - - Now decorate a function that always fails. - - >>> @passes - ... def fail(): - ... raise ValueError('failed') - - >>> fail() - False - """ - return self.raises(func, _test=operator.not_) - - -class suppress(contextlib.suppress, contextlib.ContextDecorator): - """ - A version of contextlib.suppress with decorator support. - - >>> @suppress(KeyError) - ... def key_error(): - ... {}[''] - >>> key_error() - """ diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py deleted file mode 100644 index 9d8a366d3ca78c1824eff62f6fe422542075f055..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/video_visualizer.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import pycocotools.mask as mask_util - -from detectron2.utils.visualizer import ( - ColorMode, - Visualizer, - _create_text_labels, - _PanopticPrediction, -) - -from .colormap import random_color - - -class _DetectedInstance: - """ - Used to store data about detected objects in video frame, - in order to transfer color to objects in the future frames. - - Attributes: - label (int): - bbox (tuple[float]): - mask_rle (dict): - color (tuple[float]): RGB colors in range (0, 1) - ttl (int): time-to-live for the instance. For example, if ttl=2, - the instance color can be transferred to objects in the next two frames. - """ - - __slots__ = ["label", "bbox", "mask_rle", "color", "ttl"] - - def __init__(self, label, bbox, mask_rle, color, ttl): - self.label = label - self.bbox = bbox - self.mask_rle = mask_rle - self.color = color - self.ttl = ttl - - -class VideoVisualizer: - def __init__(self, metadata, instance_mode=ColorMode.IMAGE): - """ - Args: - metadata (MetadataCatalog): image metadata. - """ - self.metadata = metadata - self._old_instances = [] - assert instance_mode in [ - ColorMode.IMAGE, - ColorMode.IMAGE_BW, - ], "Other mode not supported yet." - self._instance_mode = instance_mode - - def draw_instance_predictions(self, frame, predictions): - """ - Draw instance-level prediction results on an image. - - Args: - frame (ndarray): an RGB image of shape (H, W, C), in the range [0, 255]. - predictions (Instances): the output of an instance detection/segmentation - model. Following fields will be used to draw: - "pred_boxes", "pred_classes", "scores", "pred_masks" (or "pred_masks_rle"). - - Returns: - output (VisImage): image object with visualizations. - """ - frame_visualizer = Visualizer(frame, self.metadata) - num_instances = len(predictions) - if num_instances == 0: - return frame_visualizer.output - - boxes = predictions.pred_boxes.tensor.numpy() if predictions.has("pred_boxes") else None - scores = predictions.scores if predictions.has("scores") else None - classes = predictions.pred_classes.numpy() if predictions.has("pred_classes") else None - keypoints = predictions.pred_keypoints if predictions.has("pred_keypoints") else None - colors = predictions.COLOR if predictions.has("COLOR") else [None] * len(predictions) - durations = predictions.ID_duration if predictions.has("ID_duration") else None - duration_threshold = self.metadata.get("duration_threshold", 0) - visibilities = None if durations is None else [x > duration_threshold for x in durations] - - if predictions.has("pred_masks"): - masks = predictions.pred_masks - # mask IOU is not yet enabled - # masks_rles = mask_util.encode(np.asarray(masks.permute(1, 2, 0), order="F")) - # assert len(masks_rles) == num_instances - else: - masks = None - - detected = [ - _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=colors[i], ttl=8) - for i in range(num_instances) - ] - if not predictions.has("COLOR"): - colors = self._assign_colors(detected) - - labels = _create_text_labels(classes, scores, self.metadata.get("thing_classes", None)) - - if self._instance_mode == ColorMode.IMAGE_BW: - # any() returns uint8 tensor - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image( - (masks.any(dim=0) > 0).numpy() if masks is not None else None - ) - ) - alpha = 0.3 - else: - alpha = 0.5 - - labels = ( - None - if labels is None - else [y[0] for y in filter(lambda x: x[1], zip(labels, visibilities))] - ) # noqa - assigned_colors = ( - None - if colors is None - else [y[0] for y in filter(lambda x: x[1], zip(colors, visibilities))] - ) # noqa - frame_visualizer.overlay_instances( - boxes=None if masks is not None else boxes[visibilities], # boxes are a bit distracting - masks=None if masks is None else masks[visibilities], - labels=labels, - keypoints=None if keypoints is None else keypoints[visibilities], - assigned_colors=assigned_colors, - alpha=alpha, - ) - - return frame_visualizer.output - - def draw_sem_seg(self, frame, sem_seg, area_threshold=None): - """ - Args: - sem_seg (ndarray or Tensor): semantic segmentation of shape (H, W), - each value is the integer label. - area_threshold (Optional[int]): only draw segmentations larger than the threshold - """ - # don't need to do anything special - frame_visualizer = Visualizer(frame, self.metadata) - frame_visualizer.draw_sem_seg(sem_seg, area_threshold=None) - return frame_visualizer.output - - def draw_panoptic_seg_predictions( - self, frame, panoptic_seg, segments_info, area_threshold=None, alpha=0.5 - ): - frame_visualizer = Visualizer(frame, self.metadata) - pred = _PanopticPrediction(panoptic_seg, segments_info, self.metadata) - - if self._instance_mode == ColorMode.IMAGE_BW: - frame_visualizer.output.reset_image( - frame_visualizer._create_grayscale_image(pred.non_empty_mask()) - ) - - # draw mask for all semantic segments first i.e. "stuff" - for mask, sinfo in pred.semantic_masks(): - category_idx = sinfo["category_id"] - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[category_idx]] - except AttributeError: - mask_color = None - - frame_visualizer.draw_binary_mask( - mask, - color=mask_color, - text=self.metadata.stuff_classes[category_idx], - alpha=alpha, - area_threshold=area_threshold, - ) - - all_instances = list(pred.instance_masks()) - if len(all_instances) == 0: - return frame_visualizer.output - # draw mask for all instances second - masks, sinfo = list(zip(*all_instances)) - num_instances = len(masks) - masks_rles = mask_util.encode( - np.asarray(np.asarray(masks).transpose(1, 2, 0), dtype=np.uint8, order="F") - ) - assert len(masks_rles) == num_instances - - category_ids = [x["category_id"] for x in sinfo] - detected = [ - _DetectedInstance(category_ids[i], bbox=None, mask_rle=masks_rles[i], color=None, ttl=8) - for i in range(num_instances) - ] - colors = self._assign_colors(detected) - labels = [self.metadata.thing_classes[k] for k in category_ids] - - frame_visualizer.overlay_instances( - boxes=None, - masks=masks, - labels=labels, - keypoints=None, - assigned_colors=colors, - alpha=alpha, - ) - return frame_visualizer.output - - def _assign_colors(self, instances): - """ - Naive tracking heuristics to assign same color to the same instance, - will update the internal state of tracked instances. - - Returns: - list[tuple[float]]: list of colors. - """ - - # Compute iou with either boxes or masks: - is_crowd = np.zeros((len(instances),), dtype=np.bool) - if instances[0].bbox is None: - assert instances[0].mask_rle is not None - # use mask iou only when box iou is None - # because box seems good enough - rles_old = [x.mask_rle for x in self._old_instances] - rles_new = [x.mask_rle for x in instances] - ious = mask_util.iou(rles_old, rles_new, is_crowd) - threshold = 0.5 - else: - boxes_old = [x.bbox for x in self._old_instances] - boxes_new = [x.bbox for x in instances] - ious = mask_util.iou(boxes_old, boxes_new, is_crowd) - threshold = 0.6 - if len(ious) == 0: - ious = np.zeros((len(self._old_instances), len(instances)), dtype="float32") - - # Only allow matching instances of the same label: - for old_idx, old in enumerate(self._old_instances): - for new_idx, new in enumerate(instances): - if old.label != new.label: - ious[old_idx, new_idx] = 0 - - matched_new_per_old = np.asarray(ious).argmax(axis=1) - max_iou_per_old = np.asarray(ious).max(axis=1) - - # Try to find match for each old instance: - extra_instances = [] - for idx, inst in enumerate(self._old_instances): - if max_iou_per_old[idx] > threshold: - newidx = matched_new_per_old[idx] - if instances[newidx].color is None: - instances[newidx].color = inst.color - continue - # If an old instance does not match any new instances, - # keep it for the next frame in case it is just missed by the detector - inst.ttl -= 1 - if inst.ttl > 0: - extra_instances.append(inst) - - # Assign random color to newly-detected instances: - for inst in instances: - if inst.color is None: - inst.color = random_color(rgb=True, maximum=1) - self._old_instances = instances[:] + extra_instances - return [d.color for d in instances] diff --git a/spaces/Banbri/zcvzcv/LICENCE.md b/spaces/Banbri/zcvzcv/LICENCE.md deleted file mode 100644 index 537fde8423156f05dc00b52a4fc8eebd451f66e9..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/LICENCE.md +++ /dev/null @@ -1,170 +0,0 @@ -Apache License -============== - -_Version 2.0, January 2004_ -_<>_ - -### Terms and Conditions for use, reproduction, and distribution - -#### 1. Definitions - -“License” shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -“Licensor” shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -“Legal Entity” shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, “control” means **(i)** the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or **(ii)** ownership of fifty percent (50%) or more of the -outstanding shares, or **(iii)** beneficial ownership of such entity. - -“You” (or “Your”) shall mean an individual or Legal Entity exercising -permissions granted by this License. - -“Source” form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -“Object” form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. - -“Work” shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -“Derivative Works” shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. - -“Contribution” shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -“submitted” means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as “Not a Contribution.” - -“Contributor” shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -#### 2. Grant of Copyright License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -#### 3. Grant of Patent License - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -#### 4. Redistribution - -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -* **(a)** You must give any other recipients of the Work or Derivative Works a copy of -this License; and -* **(b)** You must cause any modified files to carry prominent notices stating that You -changed the files; and -* **(c)** You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -* **(d)** If the Work includes a “NOTICE” text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. - -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -#### 5. Submission of Contributions - -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -#### 6. Trademarks - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -#### 7. Disclaimer of Warranty - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -#### 8. Limitation of Liability - -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -#### 9. Accepting Warranty or Additional Liability - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -_END OF TERMS AND CONDITIONS_ \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Choque De Clanes Indir Apkcombo.md b/spaces/Benson/text-generation/Examples/Choque De Clanes Indir Apkcombo.md deleted file mode 100644 index 1cf4d546d9125bbf414cf0ad2aeb5b26ed34d6b2..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Choque De Clanes Indir Apkcombo.md +++ /dev/null @@ -1,153 +0,0 @@ - -

Choque de clanes Indir Apkcombo: Cómo descargar y jugar el popular juego de estrategia

-

Si estás buscando un juego de estrategia divertido y adictivo que desafíe tus habilidades y creatividad, deberías probar Clash of Clans. Este juego ha sido uno de los juegos más populares del mundo durante años, con millones de jugadores uniéndose a clanes y compitiendo en guerras épicas. En este artículo, te mostraremos cómo descargar y jugar Clash of Clans desde Apkcombo, un sitio web que ofrece archivos APK gratuitos para juegos y aplicaciones Android. También te daremos algunos consejos y trucos para ayudarte a ganar en este juego.

-

¿Qué es el Choque de Clanes?

-

Clash of Clans es un juego de estrategia desarrollado por Supercell, una compañía finlandesa que también creó otros juegos de éxito como Clash Royale, Brawl Stars, Boom Beach y Hay Day. En Clash of Clans, puedes construir tu propia aldea, entrenar a tus tropas y unirte o crear un clan con otros jugadores. A continuación, puedes participar en guerras de clanes, donde puedes atacar y defenderte contra otros clanes, o en batallas multijugador, donde puedes asaltar las aldeas de otros jugadores en busca de recursos. También puedes desbloquear y actualizar diferentes tipos de tropas, hechizos y héroes, cada uno con sus propias habilidades y estrategias.

-

choque de clanes indir apkcombo


Download 🆓 https://bltlly.com/2v6IRM



-

Una breve introducción a las características y la jugabilidad del juego

-

Clash of Clans tiene muchas características que lo convierten en un juego emocionante y diverso. Aquí están algunas de ellas:

-
    -
  • Village: Aquí es donde construyes tu base, que consta de varios edificios, como minas de oro, colectores de elixires, cuarteles, campamentos del ejército, defensas, muros, ayuntamiento, castillo del clan, laboratorio, etc. También puedes personalizar tu pueblo con decoraciones, obstáculos, pieles de héroe y escenarios.
  • - -
  • Hechizos: Estos son los efectos mágicos que puedes usar para apoyar a tus tropas u obstaculizar a tus enemigos en las batallas. Hay diferentes tipos de hechizos, como hechizo de relámpago, hechizo de sanación, hechizo de ira, hechizo de salto, hechizo de congelación, etc. Cada hechizo tiene sus propios efectos y cuesta elixir u elixir oscuro para usar.
  • -
  • Héroes: Estas son las unidades especiales que tienen habilidades poderosas y se pueden usar varias veces en batallas. Hay cuatro héroes en el juego: rey bárbaro, reina arquera, gran alcaide y campeón real. Cada héroe tiene su propio nivel que puede actualizar con elixir oscuro o gemas

    Los beneficios de descargar Clash of Clans de Apkcombo

    -

    Apkcombo es un sitio web que ofrece archivos APK gratuitos para juegos y aplicaciones Android. APK significa Android Package Kit, que es el formato de archivo utilizado por Android para distribuir e instalar aplicaciones. Al descargar archivos APK desde Apkcombo, puede disfrutar de algunos beneficios, como:

    -
      -
    • Acceso a la última versión: Apkcombo siempre actualiza los archivos APK a la última versión disponible, para que pueda obtener las nuevas características y correcciones de errores para Clash of Clans.
    • -
    • Acceso a la versión modded: Apkcombo también proporciona archivos APK modded para algunos juegos y aplicaciones, lo que significa que se han modificado para tener características o ventajas adicionales, como recursos ilimitados, elementos desbloqueados o anuncios eliminados. Sin embargo, tenga cuidado al usar archivos APK modificados, ya que pueden no ser compatibles con el juego original o la aplicación, o pueden violar los términos del servicio.
    • -
    • Acceso a la versión bloqueada por región: Apkcombo le permite descargar archivos APK de diferentes regiones, que pueden tener diferentes contenidos o idiomas. Por ejemplo, puedes descargar la versión china de Clash of Clans, que tiene algunas características y eventos exclusivos que no están disponibles en otras regiones.
    • - -
    -

    Sin embargo, también hay algunos riesgos y desventajas de descargar archivos APK de Apkcombo, como:

    -
      -
    • Malware o virus potenciales: Apkcombo afirma que todos los archivos APK son escaneados y verificados por el software antivirus, pero todavía hay una posibilidad de que algún código malicioso o software puede estar oculto en los archivos APK. Por lo tanto, siempre debe comprobar el origen y la reputación del archivo APK antes de descargarlo, y utilizar una aplicación antivirus confiable para escanearlo antes de instalarlo.
    • -
    • Problemas potenciales de compatibilidad: Apkcombo no garantiza que todos los archivos APK funcionarán en su dispositivo, ya que pueden tener diferentes requisitos o especificaciones. Por lo tanto, siempre debe comprobar la compatibilidad y los requisitos del sistema del archivo APK antes de descargarlo, y hacer una copia de seguridad de sus datos antes de instalarlo.
    • -
    • Problemas legales potenciales: Apkcombo no posee ni aloja ninguno de los archivos APK en su sitio web, pero solo proporciona enlaces a otras fuentes. Por lo tanto, siempre debe respetar los derechos de propiedad intelectual y los términos de servicio de los desarrolladores originales y editores de los juegos y aplicaciones. Descargar e instalar archivos APK desde Apkcombo puede violar sus derechos y políticas, y puede resultar en acciones legales o sanciones.
    • -
    -

    Por lo tanto, siempre debe ser cuidadoso y responsable al descargar e instalar archivos APK desde Apkcombo. Solo debe descargar archivos APK de fuentes de confianza, y solo para uso personal. También debes evitar usar archivos APK modificados que puedan darte ventajas injustas o dañar a otros jugadores en Clash of Clans.

    -

    Cómo descargar e instalar Clash of Clans desde Apkcombo

    -

    Si quieres descargar e instalar Clash of Clans desde Apkcombo, puedes seguir estos pasos:

    -

    -

    Los pasos para descargar el archivo APK desde el sitio web de Apkcombo

    -
      -
    1. Ir a Sitio web de Apkcombo en su navegador.
    2. - -
    3. Seleccione la versión de Clash of Clans que desea descargar. Puede elegir entre la versión original o la versión modificada.
    4. -
    5. Seleccione la región de Clash of Clans que desea descargar. Puede elegir entre diferentes regiones, como global, China, Japón, etc.
    6. -
    7. Seleccione la arquitectura de su dispositivo. Puede elegir entre armeabi-v7a, arm64-v8a, x86 o x86_x64.
    8. -
    9. Haga clic en el botón "Descargar" y espere a que termine la descarga.
    10. -
    -

    Los pasos para instalar el archivo APK en su dispositivo Android

    -
      -
    1. Antes de instalar el archivo APK, asegúrese de que ha habilitado la opción "Fuentes desconocidas" en la configuración del dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de Google Play Store.
    2. -
    3. Localizar el archivo APK descargado en el almacenamiento del dispositivo utilizando una aplicación de administrador de archivos.
    4. -
    5. Toque en el archivo APK y siga las instrucciones en la pantalla para instalarlo.
    6. -
    7. Esperar a que la instalación para completar y lanzar Clash of Clans desde el cajón de la aplicación

      Los pasos para actualizar el juego y solucionar cualquier problema

      -
        -
      1. Para actualizar el juego, puede descargar el último archivo APK de Apkcombo e instalarlo sobre el existente, o puede usar la opción de actualización dentro del juego si está disponible. Siempre debes actualizar el juego para disfrutar de las nuevas características y mejoras.
      2. -
      3. Para solucionar cualquier problema, como fallos, errores o fallas, puede probar algunas de estas soluciones:
          -
        • Borra la caché y los datos del juego desde la configuración de tu dispositivo.
        • -
        • Desinstalar y volver a instalar el juego desde Apkcombo.
        • -
        • Compruebe su conexión a Internet y asegúrese de que es estable y rápido.
        • -
        • Compruebe el almacenamiento del dispositivo y asegúrese de que tiene suficiente espacio para el juego.
        • -
        • Póngase en contacto con el equipo de soporte de Supercell desde la configuración del juego o su sitio web para obtener más ayuda.
        • -
        -
      4. -
      -

      Cómo jugar al choque de clanes y ganar

      - -

      Los fundamentos de la construcción de su pueblo y la elevación de su clan

      -

      Lo primero que tienes que hacer en Clash of Clans es construir tu pueblo y levantar tu clan. Estos son algunos pasos básicos a seguir:

      -
        -
      1. Comienza construyendo y mejorando tu ayuntamiento, que es el corazón de tu pueblo. Tu nivel de ayuntamiento determina qué edificios y tropas puedes desbloquear y usar.
      2. -
      3. Construya y actualice sus edificios de recursos, como minas de oro, colectores de elixir, almacenes de oro, almacenes de elixir, taladros de elixir oscuro y almacenes de elixir oscuro. Estos edificios le proporcionarán los recursos que necesita para construir y mejorar otros edificios y tropas.
      4. -
      5. Construye y mejora tus edificios de defensa, como cañones, torres de arqueros, morteros, defensas aéreas, torres de magos, teslas ocultas, torres de bombas, torres de infierno, artillería de águilas, etc. Estos edificios protegerán tu pueblo de los ataques enemigos.
      6. -
      7. Construir y mejorar sus paredes, que actuará como una barrera contra las tropas enemigas. También puedes colocar trampas, como bombas, trampas de resorte, bombas de aire, bombas gigantes, minas de aire en busca, trampas de esqueleto, etc. para sorprender y dañar a las tropas enemigas.
      8. -
      9. Construir y mejorar sus edificios del ejército, tales como cuarteles, cuarteles oscuros, campamentos del ejército, fábrica de hechizos, fábrica de hechizos oscuros, taller de asedio, etc. Estos edificios le permitirá entrenar y almacenar sus tropas y hechizos para batallas.
      10. -
      11. Construye y mejora el castillo de tu clan, lo que te permitirá unirte o crear un clan con otros jugadores. También puedes solicitar y donar tropas y hechizos a los miembros de tu clan, que te ayudarán en las batallas.
      12. -
      13. Construir y mejorar sus edificios héroe, tales como el rey bárbaro altar, arquero reina altar, gran guardián altar y campeón real altar. Estos edificios te permitirán desbloquear y usar a los héroes en las batallas.
      14. - -
      -

      Siempre debes tratar de equilibrar el desarrollo de tu pueblo, y no descuidar ningún aspecto de él. También debe seguir el pedido de actualización recomendado, que puede encontrar en varias guías y sitios web en línea.

      -

      Los consejos y trucos para atacar y defender en las guerras de clanes y batallas multijugador

      -

      Una de las principales atracciones de Clash of Clans son las guerras de clanes y las batallas multijugador, donde puedes poner a prueba tus habilidades y estrategias contra otros jugadores. Aquí hay algunos consejos y trucos para ayudarte a atacar y defender en estos modos:

      -
        -
      • Explora a tu enemigo: Antes de atacar, siempre debes explorar la aldea de tu enemigo y analizar su diseño, defensas, trampas, tropas del castillo del clan, héroes, etc. También debes revisar su perfil y ver su historia de ataque y defensa, trofeos, liga, clan, etc. Esto te ayudará a planificar tu ataque y elegir las mejores tropas y hechizos para él.
      • -
      • Usa la composición correcta del ejército: Dependiendo de la aldea de tu enemigo y tu estrategia, debes usar la composición correcta del ejército para tu ataque. Debes considerar el costo, tiempo de entrenamiento, espacio de alojamiento, daños, salud, velocidad, rango, preferencia de objetivo, habilidad especial, etc. de cada tropa y hechizo. También debes tener una variedad de tropas y hechizos para lidiar con diferentes situaciones y obstáculos.
      • -
      • Usa la técnica de despliegue correcta: Dependiendo de la composición de tu ejército y tu estrategia, debes usar la técnica de despliegue correcta para tu ataque. Debes considerar el tiempo, ubicación, dirección, espaciado, agrupación, canalización, etc. de cada tropa y hechizo. También debes usar las habilidades del héroe y las tropas del castillo del clan sabiamente.
      • - -
      • Practica y aprende: La mejor manera de mejorar tus habilidades de ataque es practicar y aprender de tus propios ataques y los de los demás. Puedes usar la función de desafío amistoso para practicar con tus compañeros de clan o el modo de práctica para aprender algunas estrategias básicas. También puedes ver las repeticiones de tus propios ataques y los de otros para ver qué funcionó y qué no.
      • -
      • Diseña tu base: Para defender tu aldea de los ataques enemigos, debes diseñar tu base con cuidado y estratégicamente. Usted debe considerar la disposición, colocación , y la sinergia de cada edificio, pared, trampa, clan de la tropa del castillo, héroe, etc. También debe seguir los principios de diseño de base recomendados, que se pueden encontrar en varias guías y sitios web en línea.
      • -
      • Mejora tus defensas: Para defender tu pueblo de los ataques enemigos, debes mejorar tus defensas de forma regular y estratégica. Debe considerar el costo, tiempo, efecto, prioridad, etc. de cada actualización. También debe seguir el pedido de actualización recomendado, que puede encontrar en varias guías y sitios web en línea.
      • -
      • Prueba tu base: Para defender tu aldea de los ataques enemigos, debes probar tu base con frecuencia y de manera realista. Puedes usar la función de desafío amigable para probar tu base con tus compañeros de clan o el editor de diseño de base para probar tu base con diferentes escenarios. También puedes ver las repeticiones de ataques enemigos para ver cómo funciona tu base y qué puedes mejorar.
      • -
      -

      Los recursos y estrategias para mejorar tus tropas, hechizos y héroes

      -

      Para tener éxito en Clash of Clans, necesitas mejorar tus tropas, hechizos y héroes constantemente y estratégicamente. Aquí hay algunos recursos y estrategias para ayudarle a hacer eso:

      -
        - -
      • Elixir oscuro: Este es un recurso especial que necesitas para actualizar tus tropas oscuras, hechizos oscuros, héroes y algunos edificios. Puedes obtener elixir oscuro de taladros de elixir oscuro, asaltar aldeas de otros jugadores, completar logros y eventos, abrir carros de botín y cofres de bonificación de estrellas, etc.
      • -
      • Gemas: Este es un recurso premium que puede usar para acelerar las actualizaciones, comprar recursos, impulsar edificios, entrenar tropas y hechizos al instante, etc. Puede obtener gemas de eliminar obstáculos, completar logros y eventos, abrir cajas de gemas y carritos de minas de gemas, comprar con dinero real, etc.
      • -
      • Base de constructor de oro y elixir: Estos son los recursos que necesita para actualizar sus tropas de base de constructor, edificios y paredes. Puedes obtener oro base constructor y elixir de minas de oro y coleccionistas de elixires, ganar batallas, completar logros y eventos, abrir carritos de botín y cofres de bonificación de estrellas, etc.
      • -
      • Gemas de base de constructor: Este es un recurso que puedes usar para acelerar las actualizaciones, comprar recursos, impulsar edificios, etc. en tu base de constructor. Puede obtener gemas base constructor de despejar obstáculos, completar logros y eventos, abrir cajas de gemas y carretas de minas de gemas, comprarlos con dinero real, etc.
      • -
      • Artículos mágicos: Estos son artículos especiales que puedes usar para aumentar tu progreso de varias maneras, como aumentar tu producción de recursos, reducir tu tiempo o costo de actualización, mejorar tus tropas o hechizos, etc. Puedes obtener objetos mágicos al completar juegos de clan, alcanzando ciertos niveles de liga, comprándolos con gemas o dinero real, etc.
      • - -
      -

      Conclusión

      -

      Clash of Clans es un juego que te mantendrá entretenido y comprometido durante horas. Puedes descargarlo y jugarlo desde Apkcombo, un sitio web que ofrece archivos APK gratuitos para juegos y aplicaciones Android. Sin embargo, debe ser cuidadoso y responsable al descargar e instalar archivos APK desde Apkcombo. También debes seguir algunos consejos y trucos para ayudarte a construir tu pueblo, levantar tu clan y ganar en guerras de clanes y batallas multijugador. También debes mejorar tus tropas, hechizos y héroes de forma regular y estratégica. Esperamos que este artículo te haya ayudado a aprender más sobre Clash of Clans indir Apkcombo. ¡Ahora sigue adelante y disfruta del juego!

      -

      Preguntas frecuentes

      -

      Q1: ¿Es Clash of Clans libre para jugar?

      -

      A1: Sí, Clash of Clans es gratis para descargar y jugar. Sin embargo, también ofrece algunas compras opcionales en el juego con dinero real, como gemas, objetos mágicos , u ofertas especiales. Puede desactivar estas compras desde la configuración de su dispositivo si lo desea.

      -

      Q2: ¿Es seguro descargar Clash of Clans desde Apkcombo?

      -

      A2: Apkcombo afirma que todos los archivos APK en su sitio web son escaneados y verificados por el software antivirus, pero todavía hay un riesgo de malware o virus. Por lo tanto, siempre debe comprobar el origen y la reputación del archivo APK antes de descargarlo, y utilizar una aplicación antivirus confiable para escanearlo antes de instalarlo. También debe descargar solo archivos APK de fuentes de confianza, y solo para uso personal.

      -

      Q3: ¿Cómo puedo unirme o crear un clan en Clash of Clans?

      -

      A3: Para unirte o crear un clan en Clash of Clans, necesitas tener un castillo de clan, que puedes construir después de llegar al nivel 3 del ayuntamiento. Puedes tocar el castillo del clan y elegir la opción de unirte o crear un clan. Puedes buscar clanes por nombre, etiqueta, ubicación, nivel, miembros, etc. o navegar por los clanes recomendados. También puedes invitar o aceptar a otros jugadores para que se unan a tu clan. Puedes chatear, donar, solicitar y luchar con los miembros de tu clan.

      - -

      A4: No hay una respuesta definitiva a esta pregunta, ya que diferentes tropas y hechizos pueden funcionar mejor para diferentes situaciones y estrategias. Sin embargo, algunas de las tropas y hechizos más populares y eficaces son:

      - - -Tropas -Hechizos - - -Mineros -Hechizo de sanación - - -Jugadores de bolos -Hechizo de ira - - -Jinetes de cerdo -Hechizo de congelación - - -Electro dragones -Hechizo de murciélago - - -Perros de lava -Hechizo de prisa - - -Globos -Hechizo de clonación - - -Golems -Hechizo de veneno - - -BrujasHechizo de terremoto - - -

      Puedes experimentar con diferentes combinaciones de tropas y hechizos para encontrar los que se adapten a tu estilo y objetivos.

      -

      Q5: ¿Cómo puedo contactar a Supercell para soporte o retroalimentación?

      -

      A5: Si tienes algún problema, pregunta o sugerencia con respecto a Clash of Clans, puedes ponerte en contacto con Supercell para obtener apoyo o comentarios. Puedes hacer esto por:

      -
        -
      • Tocando en el icono de configuración en el juego y la elección de la "Ayuda y soporte" opción. A continuación, puede examinar las preguntas frecuentes, informar de un problema o enviar un mensaje al equipo de soporte.
      • -
      • Visitando el sitio web oficial de Clash of Clans y eligiendo la opción "Contáctenos". Luego puede llenar un formulario con sus datos y consulta.
      • -
      • Visitar los foros oficiales de Clash of Clans y publicar su consulta o retroalimentación en la sección correspondiente. También puede interactuar con otros jugadores y moderadores allí.
      • -
      • Visitar las páginas oficiales de redes sociales de Clash of Clans, como Facebook, Twitter, Instagram, YouTube, etc. y dejar un comentario o mensaje allí. También puede seguir las últimas noticias y actualizaciones allí.
      • -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Comercial Zugacoin.md b/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Comercial Zugacoin.md deleted file mode 100644 index 3097d12a474b83ee2976034d56dbcc50d76ad6e0..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga De La Aplicacin Comercial Zugacoin.md +++ /dev/null @@ -1,138 +0,0 @@ - -

      Descarga de la aplicación comercial Zugacoin: Una guía para principiantes

      -

      Si usted está buscando una manera de invertir en criptomonedas, activos digitales de comercio, o préstamos de acceso en África, es posible que desee considerar el uso de Zugacoin. Zugacoin es una criptomoneda revolucionaria que tiene como objetivo reconstruir la economía moribunda de África convirtiéndose en la primera moneda en capital y financiación de inversiones. En este artículo, le mostraremos cómo descargar, instalar y usar la aplicación comercial Zugacoin, que es una plataforma segura y conveniente para comprar y vender Zugacoin. También revisaremos las características y beneficios de Zugacoin, así como sus revisiones, calificaciones, pros, contras y comparación con otras criptomonedas. Al final de este artículo, usted tendrá una idea clara de si Zugacoin es una opción de inversión digna para usted o no.

      -

      descarga de la aplicación comercial zugacoin


      Download Zip ✦✦✦ https://bltlly.com/2v6IAp



      -

      Características y beneficios de Zugacoin

      -

      Zugacoin es una criptomoneda que se construye bajo la cadena de bloques Ethereum. Es un token ERC20 con el ticker (SZC); también es negociable en intercambios criptográficos. Este token lanzado a finales de 2020 y es #2672 en el rango de cryptocurrencies en existencia. Al momento de escribir este artículo, Zugacoin cotiza a $47.06 (Coinmarketcap).

      -

      Zugacoin tiene un suministro máximo que es mucho más limitado que el suministro total de bitcoin. Bitcoin Max. suministro = 21 millones BTC, mientras que Zugacoin Max. suministro = 1 millón SZC. Además, este token tiene la funcionalidad de prueba de apuesta; esto simplemente significa que puedes ganar recompensas apostando o mezclando el token SZC.

      -

      Zugacoin fue fundada por el Arzobispo Dr. Sam Zuga, un clérigo de la Iglesia Casa de la Alegría, ubicada en Gboko, estado de Benue, Nigeria. Sam Zuga quería una moneda que fomentará el desarrollo económico en África a través de las finanzas descentralizadas. Para lograr este concepto, Zugacoin fue concebido.

      - -

      Zugacoin pretende ser una criptomoneda revolucionaria que restaure la economía africana. Quiere cambiar África para siempre haciendo uso de la tecnología blockchain en las economías emergentes de África y más allá. Su objetivo es liberar el potencial creando, ganando, ahorrando y gastando oportunidades en toda África.

      -

      -

      Los usuarios objetivo de Zugacoin son personas subempleadas y desempleadas de África, además de ayudar al gobierno africano en el desarrollo de la economía. Se aconseja a los africanos a tomar ventaja de esta moneda para la libertad financiera, especialmente ya que está en la red Binance SZCB.

      -

      Algunas de las características y beneficios de usar Zugacoin son:

      -
        -
      • Ofrece transacciones rápidas, seguras y de bajo costo a través de las fronteras.
      • -
      • Proporciona acceso a préstamos para empresas emergentes y necesidades personales.
      • Admite múltiples métodos de pago, como transferencia bancaria, pago con tarjeta y dinero móvil. -
      • Permite a los usuarios obtener ingresos pasivos mediante la apuesta o la celebración de Zugacoin en sus carteras.
      • -
      • Tiene una oferta limitada de 1 millón de SZC, lo que significa que tiene un alto potencial de escasez y demanda.
      • -
      • Está respaldado por un fundador de buena reputación y un equipo de expertos en blockchain, finanzas y marketing.
      • -
      • Es compatible con la red Ethereum y se puede integrar con otras aplicaciones descentralizadas.
      • -
      -

      Cómo descargar e instalar Zugacoin Merchant App

      -

      Si desea comenzar a usar Zugacoin, tendrá que descargar e instalar la aplicación comercial Zugacoin en su teléfono inteligente. La aplicación está disponible para dispositivos Android e iOS y se puede descargar desde el sitio web oficial o las tiendas de aplicaciones. Estos son los pasos a seguir:

      -

      Para usuarios de Android

      -
        -
      1. Ir a la Google Play Store y buscar "Zugacoin Merchant App".
      2. -
      3. Seleccione la aplicación de la lista y toque en "Instalar".
      4. -
      5. Espere a que la aplicación se descargue e instale en su dispositivo.
      6. - -
      -

      Para usuarios de iOS

      -
        -
      1. Ir a la App Store y buscar "Zugacoin Merchant App".
      2. -
      3. Seleccione la aplicación de la lista y toque en "Obtener".
      4. -
      5. Ingresa tu contraseña de Apple ID o usa Touch ID o Face ID para confirmar.
      6. -
      7. Espere a que la aplicación se descargue e instale en su dispositivo.
      8. -
      9. Abra la aplicación y acepte los términos y condiciones.
      10. -
      -

      Cómo registrarse y verificar su cuenta

      -

      Después de haber descargado e instalado la aplicación comercial Zugacoin, tendrá que registrarse y verificar su cuenta antes de comenzar a usarla. Estos son los pasos a seguir:

      -
        -
      1. Abra la aplicación y toque en "Crear cuenta".
      2. -
      3. Ingrese su nombre completo, dirección de correo electrónico, número de teléfono, contraseña y código de referencia (si existe).
      4. -
      5. Toque en "Registrarse" y compruebe su correo electrónico para un enlace de verificación.
      6. -
      7. Haga clic en el enlace para verificar su dirección de correo electrónico y activar su cuenta.
      8. -
      9. Inicie sesión en su cuenta y toque en "Perfil".
      10. -
      11. Seleccione "Verificación" y cargue su documento de identidad (como pasaporte, licencia de conducir o tarjeta de identificación nacional).
      12. Introduzca sus datos personales, como su fecha de nacimiento, sexo, dirección y país. -
      13. Toque en "Enviar" y espere a que se complete la verificación.
      14. -
      15. Recibirás una notificación cuando tu cuenta esté verificada y lista para usar.
      16. -
      -

      Cómo comprar y vender Zugacoin en la aplicación

      -

      Una vez que haya verificado su cuenta, puede comenzar a comprar y vender Zugacoin en la aplicación. Hay tres formas principales de hacer esto: usar la función de escaneo a pago, usar el intercambio P2P y usar la función de intercambio. Estos son los pasos a seguir para cada método:

      -
      Uso de la función de escaneo a pago
      -

      Esta función le permite pagar por bienes y servicios con Zugacoin escaneando un código QR. También puedes recibir pagos de otros usuarios generando tu propio código QR. Estos son los pasos a seguir:

      -
        - -
      1. Si quieres pagar a alguien, escanea su código QR con tu cámara. Si desea recibir el pago, toque en "Recibir" y mostrar su código QR al pagador.
      2. -
      3. Introduzca la cantidad de Zugacoin que desea enviar o recibir y confirme la transacción.
      4. -
      5. Verá un mensaje de confirmación y un recibo de la transacción.
      6. -
      -
      Usando el intercambio P2P
      -

      Esta característica le permite comprar y vender Zugacoin con otros usuarios directamente. Puede elegir entre una lista de ofertas o crear su propia oferta. También puede chatear con el vendedor o el comprador y calificarlos después de la transacción. Estos son los pasos a seguir:

      -
        -
      1. Abra la aplicación y toque en "P2P Exchange".
      2. -
      3. Si desea comprar Zugacoin, toque en "Comprar". Si desea vender Zugacoin, toque en "Vender".
      4. -
      5. Navegar por la lista de ofertas y seleccionar el que se adapte a sus necesidades. Puede filtrar las ofertas por método de pago, ubicación, precio y calificación.
      6. -
      7. Toque en "Comercio" y chatear con el vendedor o comprador para acordar los términos de la transacción.
      8. -
      9. Siga las instrucciones en la pantalla y complete el pago o transferencia de Zugacoin.
      10. -
      11. Toque en "Confirmar" y espere la confirmación de la otra parte.
      12. -
      13. Verá un mensaje de confirmación y un recibo de la transacción.
      14. -
      15. También puede calificar y revisar al vendedor o comprador después de la transacción.
      16. -
      -
      Usando la función de intercambio
      -

      Esta función le permite intercambiar Zugacoin con otras criptomonedas, como Bitcoin, Ethereum, Binance Coin, Tether, etc. Puede elegir entre una lista de monedas admitidas o ingresar una cantidad personalizada. Estos son los pasos a seguir:

      -
        -
      1. Abra la aplicación y toque en "Intercambiar".
      2. -
      3. Seleccione la moneda que desea intercambiar y la moneda que desea intercambiar.
      4. Introduzca la cantidad de moneda que desea intercambiar o use el control deslizante para ajustar la cantidad. -
      5. Toque en "Intercambiar ahora" y confirme la transacción.
      6. - -
      -

      Comentarios y valoraciones de Zugacoin

      -

      Zugacoin es una criptomoneda relativamente nueva que aún no ha ganado mucha popularidad o reconocimiento en el espacio criptográfico. Sin embargo, ha recibido algunas críticas y valoraciones de usuarios y expertos que lo han probado o analizado. Estos son algunos de ellos:

      -

      Pros y contras de Zugacoin

      -

      Como cualquier otra criptomoneda, Zugacoin tiene sus propios pros y contras que usted debe ser consciente de antes de invertir en ella. Aquí está un resumen de las principales ventajas y desventajas de usar Zugacoin:

      - - -Pros -Contras - - -- Ofrece transacciones rápidas, seguras y de bajo costo a través de las fronteras. -- Tiene un suministro limitado de 1 millón de SZC, lo que puede limitar su escalabilidad y adopción. - - -- Proporciona acceso a préstamos para startups de negocios y necesidades personales. -- No es ampliamente aceptado o apoyado por comerciantes, bolsas o carteras. - - -- Permite a los usuarios obtener ingresos pasivos mediante la apuesta o la celebración de Zugacoin en sus carteras. -- Es vulnerable a la volatilidad del mercado, la incertidumbre regulatoria y los ciberataques. - - -- Está respaldado por un fundador de buena reputación y un equipo de expertos en blockchain, finanzas y marketing. -- Tiene una baja capitalización de mercado, liquidez y volumen de operaciones. - - -- Es compatible con la red Ethereum y se puede integrar con otras aplicaciones descentralizadas. -- Tiene una baja conciencia, confianza y reputación entre la comunidad criptográfica. - - -

      Zugacoin vs otras criptomonedas

      -

      Zugacoin no es la única criptomoneda que tiene como objetivo empoderar a África y promover la inclusión financiera. Hay otras criptomonedas que tienen objetivos o características similares, como Akoin, KubitX, BitSika, etc. ¿Cómo se compara Zugacoin con ellos? Aquí hay algunos puntos de comparación:

      -
        - -
      • Zugacoin es más compatible con la red Ethereum y sus aplicaciones descentralizadas que otras criptomonedas que utilizan diferentes blockchains o protocolos.
      • -
      • Zugacoin tiene una aceptación, soporte y adopción más limitada que otras criptomonedas que tienen más asociaciones, integraciones e intercambios.
      • -
      • Zugacoin tiene un gobierno y una visión más centralizados que otras criptomonedas que tienen más participación y retroalimentación de la comunidad.
      • -
      -

      Conclusión

      -

      Zugacoin es una criptomoneda que tiene como objetivo reconstruir la economía de África mediante la concesión de préstamos, pagos e inversiones para las nuevas empresas y las necesidades personales. Es una forma rápida, segura y de bajo costo de realizar transacciones a través de las fronteras y obtener ingresos pasivos al apostar o mantener Zugacoin en su billetera. Está respaldado por un fundador de buena reputación y un equipo de expertos en blockchain, finanzas y marketing. Es compatible con la red Ethereum y puede integrarse con otras aplicaciones descentralizadas.

      -

      Sin embargo, Zugacoin también tiene algunos inconvenientes que debe considerar antes de invertir en él. Tiene un suministro limitado de 1 millón de SZC, lo que puede limitar su escalabilidad y adopción. No es ampliamente aceptado o apoyado por los comerciantes, intercambios o carteras. Es vulnerable a la volatilidad del mercado, la incertidumbre regulatoria y los ciberataques. Tiene una baja capitalización de mercado, liquidez y volumen de operaciones. Tiene una baja conciencia, confianza y reputación entre la comunidad criptográfica.

      -

      Si desea probar Zugacoin, tendrá que descargar e instalar la aplicación comercial Zugacoin en su teléfono inteligente. La aplicación es una plataforma segura y conveniente para comprar y vender Zugacoin. Puede utilizar la función de escaneo a pago, el intercambio P2P o la función de intercambio para operar Zugacoin con otros usuarios o criptomonedas. También tendrá que registrarse y verificar su cuenta antes de que pueda comenzar a usar la aplicación.

      - -

      Preguntas frecuentes

      -

      Aquí están algunas de las preguntas y respuestas más frecuentes sobre Zugacoin:

      -
        -
      1. ¿Cuál es el sitio web oficial de Zugacoin?
      2. -

        El sitio web oficial de Zugacoin es https://zugacoin.com/ Puede encontrar más información sobre la visión de Zugacoin, misión, hoja de ruta, equipo, socios, noticias, eventos, etc. en el sitio web.

        -
      3. ¿Dónde puedo comprar Zugacoin?
      4. -

        Puedes comprar Zugacoin en la aplicación comercial Zugacoin o en algunos intercambios de criptografía que lo soportan. Algunos de los intercambios que enumeran Zugacoin son BitMart, VinDAX, FinexBox, SatoExchange, etc.

        -
      5. ¿Cómo puedo ponerme en contacto con Zugacoin?
      6. Puede ponerse en contacto con Zugacoin enviando un correo electrónico a info@zugacoin.com o llamando al +234 811 377 7709. También puedes seguir a Zugacoin en redes sociales, como Facebook, Twitter, Instagram, YouTube, etc.

        -
      7. ¿Es Zugacoin una estafa?
      8. -

        No, Zugacoin no es una estafa. Es una criptomoneda legítima que está registrada y regulada por el gobierno nigeriano. Tiene una visión clara, misión, hoja de ruta, equipo, socios y comunidad. También tiene un libro mayor de blockchain transparente y auditable que registra todas las transacciones y actividades.

        -
      9. ¿Cómo puedo almacenar Zugacoin?
      10. -

        Puede almacenar Zugacoin en la aplicación comercial Zugacoin o en cualquier cartera compatible que soporte tokens ERC20. Algunas de las carteras que puedes usar son Trust Wallet, MetaMask, MyEtherWallet, etc. Siempre debes mantener tus claves y contraseñas privadas seguras.

        -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_install.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_install.py deleted file mode 100644 index d01b24a918954bd5440c94463369ee7a666aad29..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/req/req_install.py +++ /dev/null @@ -1,867 +0,0 @@ -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import functools -import logging -import os -import shutil -import sys -import uuid -import zipfile -from optparse import Values -from typing import Any, Collection, Dict, Iterable, List, Optional, Sequence, Union - -from pip._vendor.packaging.markers import Marker -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import Version -from pip._vendor.packaging.version import parse as parse_version -from pip._vendor.pyproject_hooks import BuildBackendHookCaller - -from pip._internal.build_env import BuildEnvironment, NoOpBuildEnvironment -from pip._internal.exceptions import InstallationError -from pip._internal.locations import get_scheme -from pip._internal.metadata import ( - BaseDistribution, - get_default_environment, - get_directory_distribution, - get_wheel_distribution, -) -from pip._internal.metadata.base import FilesystemWheel -from pip._internal.models.direct_url import DirectUrl -from pip._internal.models.link import Link -from pip._internal.operations.build.metadata import generate_metadata -from pip._internal.operations.build.metadata_editable import generate_editable_metadata -from pip._internal.operations.build.metadata_legacy import ( - generate_metadata as generate_metadata_legacy, -) -from pip._internal.operations.install.editable_legacy import ( - install_editable as install_editable_legacy, -) -from pip._internal.operations.install.wheel import install_wheel -from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path -from pip._internal.req.req_uninstall import UninstallPathSet -from pip._internal.utils.deprecation import deprecated -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.misc import ( - ConfiguredBuildBackendHookCaller, - ask_path_exists, - backup_dir, - display_path, - hide_url, - redact_auth_from_url, -) -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds -from pip._internal.utils.virtualenv import running_under_virtualenv -from pip._internal.vcs import vcs - -logger = logging.getLogger(__name__) - - -class InstallRequirement: - """ - Represents something that may be installed later on, may have information - about where to fetch the relevant requirement and also contains logic for - installing the said requirement. - """ - - def __init__( - self, - req: Optional[Requirement], - comes_from: Optional[Union[str, "InstallRequirement"]], - editable: bool = False, - link: Optional[Link] = None, - markers: Optional[Marker] = None, - use_pep517: Optional[bool] = None, - isolated: bool = False, - *, - global_options: Optional[List[str]] = None, - hash_options: Optional[Dict[str, List[str]]] = None, - config_settings: Optional[Dict[str, Union[str, List[str]]]] = None, - constraint: bool = False, - extras: Collection[str] = (), - user_supplied: bool = False, - permit_editable_wheels: bool = False, - ) -> None: - assert req is None or isinstance(req, Requirement), req - self.req = req - self.comes_from = comes_from - self.constraint = constraint - self.editable = editable - self.permit_editable_wheels = permit_editable_wheels - - # source_dir is the local directory where the linked requirement is - # located, or unpacked. In case unpacking is needed, creating and - # populating source_dir is done by the RequirementPreparer. Note this - # is not necessarily the directory where pyproject.toml or setup.py is - # located - that one is obtained via unpacked_source_directory. - self.source_dir: Optional[str] = None - if self.editable: - assert link - if link.is_file: - self.source_dir = os.path.normpath(os.path.abspath(link.file_path)) - - if link is None and req and req.url: - # PEP 508 URL requirement - link = Link(req.url) - self.link = self.original_link = link - - # When this InstallRequirement is a wheel obtained from the cache of locally - # built wheels, this is the source link corresponding to the cache entry, which - # was used to download and build the cached wheel. - self.cached_wheel_source_link: Optional[Link] = None - - # Information about the location of the artifact that was downloaded . This - # property is guaranteed to be set in resolver results. - self.download_info: Optional[DirectUrl] = None - - # Path to any downloaded or already-existing package. - self.local_file_path: Optional[str] = None - if self.link and self.link.is_file: - self.local_file_path = self.link.file_path - - if extras: - self.extras = extras - elif req: - self.extras = {safe_extra(extra) for extra in req.extras} - else: - self.extras = set() - if markers is None and req: - markers = req.marker - self.markers = markers - - # This holds the Distribution object if this requirement is already installed. - self.satisfied_by: Optional[BaseDistribution] = None - # Whether the installation process should try to uninstall an existing - # distribution before installing this requirement. - self.should_reinstall = False - # Temporary build location - self._temp_build_dir: Optional[TempDirectory] = None - # Set to True after successful installation - self.install_succeeded: Optional[bool] = None - # Supplied options - self.global_options = global_options if global_options else [] - self.hash_options = hash_options if hash_options else {} - self.config_settings = config_settings - # Set to True after successful preparation of this requirement - self.prepared = False - # User supplied requirement are explicitly requested for installation - # by the user via CLI arguments or requirements files, as opposed to, - # e.g. dependencies, extras or constraints. - self.user_supplied = user_supplied - - self.isolated = isolated - self.build_env: BuildEnvironment = NoOpBuildEnvironment() - - # For PEP 517, the directory where we request the project metadata - # gets stored. We need this to pass to build_wheel, so the backend - # can ensure that the wheel matches the metadata (see the PEP for - # details). - self.metadata_directory: Optional[str] = None - - # The static build requirements (from pyproject.toml) - self.pyproject_requires: Optional[List[str]] = None - - # Build requirements that we will check are available - self.requirements_to_check: List[str] = [] - - # The PEP 517 backend we should use to build the project - self.pep517_backend: Optional[BuildBackendHookCaller] = None - - # Are we using PEP 517 for this requirement? - # After pyproject.toml has been loaded, the only valid values are True - # and False. Before loading, None is valid (meaning "use the default"). - # Setting an explicit value before loading pyproject.toml is supported, - # but after loading this flag should be treated as read only. - self.use_pep517 = use_pep517 - - # This requirement needs more preparation before it can be built - self.needs_more_preparation = False - - def __str__(self) -> str: - if self.req: - s = str(self.req) - if self.link: - s += " from {}".format(redact_auth_from_url(self.link.url)) - elif self.link: - s = redact_auth_from_url(self.link.url) - else: - s = "" - if self.satisfied_by is not None: - if self.satisfied_by.location is not None: - location = display_path(self.satisfied_by.location) - else: - location = "" - s += f" in {location}" - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from: Optional[str] = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += f" (from {comes_from})" - return s - - def __repr__(self) -> str: - return "<{} object: {} editable={!r}>".format( - self.__class__.__name__, str(self), self.editable - ) - - def format_debug(self) -> str: - """An un-tested helper for getting state, for debugging.""" - attributes = vars(self) - names = sorted(attributes) - - state = ("{}={!r}".format(attr, attributes[attr]) for attr in sorted(names)) - return "<{name} object: {{{state}}}>".format( - name=self.__class__.__name__, - state=", ".join(state), - ) - - # Things that are valid for all kinds of requirements? - @property - def name(self) -> Optional[str]: - if self.req is None: - return None - return self.req.name - - @functools.lru_cache() # use cached_property in python 3.8+ - def supports_pyproject_editable(self) -> bool: - if not self.use_pep517: - return False - assert self.pep517_backend - with self.build_env: - runner = runner_with_spinner_message( - "Checking if build backend supports build_editable" - ) - with self.pep517_backend.subprocess_runner(runner): - return "build_editable" in self.pep517_backend._supported_features() - - @property - def specifier(self) -> SpecifierSet: - return self.req.specifier - - @property - def is_pinned(self) -> bool: - """Return whether I am pinned to an exact version. - - For example, some-package==1.2 is pinned; some-package>1.2 is not. - """ - specifiers = self.specifier - return len(specifiers) == 1 and next(iter(specifiers)).operator in {"==", "==="} - - def match_markers(self, extras_requested: Optional[Iterable[str]] = None) -> bool: - if not extras_requested: - # Provide an extra to safely evaluate the markers - # without matching any extra - extras_requested = ("",) - if self.markers is not None: - return any( - self.markers.evaluate({"extra": extra}) for extra in extras_requested - ) - else: - return True - - @property - def has_hash_options(self) -> bool: - """Return whether any known-good hashes are specified as options. - - These activate --require-hashes mode; hashes specified as part of a - URL do not. - - """ - return bool(self.hash_options) - - def hashes(self, trust_internet: bool = True) -> Hashes: - """Return a hash-comparer that considers my option- and URL-based - hashes to be known-good. - - Hashes in URLs--ones embedded in the requirements file, not ones - downloaded from an index server--are almost peers with ones from - flags. They satisfy --require-hashes (whether it was implicitly or - explicitly activated) but do not activate it. md5 and sha224 are not - allowed in flags, which should nudge people toward good algos. We - always OR all hashes together, even ones from URLs. - - :param trust_internet: Whether to trust URL-based (#md5=...) hashes - downloaded from the internet, as by populate_link() - - """ - good_hashes = self.hash_options.copy() - if trust_internet: - link = self.link - elif self.original_link and self.user_supplied: - link = self.original_link - else: - link = None - if link and link.hash: - good_hashes.setdefault(link.hash_name, []).append(link.hash) - return Hashes(good_hashes) - - def from_path(self) -> Optional[str]: - """Format a nice indicator to show where this "comes from" """ - if self.req is None: - return None - s = str(self.req) - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += "->" + comes_from - return s - - def ensure_build_location( - self, build_dir: str, autodelete: bool, parallel_builds: bool - ) -> str: - assert build_dir is not None - if self._temp_build_dir is not None: - assert self._temp_build_dir.path - return self._temp_build_dir.path - if self.req is None: - # Some systems have /tmp as a symlink which confuses custom - # builds (such as numpy). Thus, we ensure that the real path - # is returned. - self._temp_build_dir = TempDirectory( - kind=tempdir_kinds.REQ_BUILD, globally_managed=True - ) - - return self._temp_build_dir.path - - # This is the only remaining place where we manually determine the path - # for the temporary directory. It is only needed for editables where - # it is the value of the --src option. - - # When parallel builds are enabled, add a UUID to the build directory - # name so multiple builds do not interfere with each other. - dir_name: str = canonicalize_name(self.name) - if parallel_builds: - dir_name = f"{dir_name}_{uuid.uuid4().hex}" - - # FIXME: Is there a better place to create the build_dir? (hg and bzr - # need this) - if not os.path.exists(build_dir): - logger.debug("Creating directory %s", build_dir) - os.makedirs(build_dir) - actual_build_dir = os.path.join(build_dir, dir_name) - # `None` indicates that we respect the globally-configured deletion - # settings, which is what we actually want when auto-deleting. - delete_arg = None if autodelete else False - return TempDirectory( - path=actual_build_dir, - delete=delete_arg, - kind=tempdir_kinds.REQ_BUILD, - globally_managed=True, - ).path - - def _set_requirement(self) -> None: - """Set requirement after generating metadata.""" - assert self.req is None - assert self.metadata is not None - assert self.source_dir is not None - - # Construct a Requirement object from the generated metadata - if isinstance(parse_version(self.metadata["Version"]), Version): - op = "==" - else: - op = "===" - - self.req = Requirement( - "".join( - [ - self.metadata["Name"], - op, - self.metadata["Version"], - ] - ) - ) - - def warn_on_mismatching_name(self) -> None: - metadata_name = canonicalize_name(self.metadata["Name"]) - if canonicalize_name(self.req.name) == metadata_name: - # Everything is fine. - return - - # If we're here, there's a mismatch. Log a warning about it. - logger.warning( - "Generating metadata for package %s " - "produced metadata for project name %s. Fix your " - "#egg=%s fragments.", - self.name, - metadata_name, - self.name, - ) - self.req = Requirement(metadata_name) - - def check_if_exists(self, use_user_site: bool) -> None: - """Find an installed distribution that satisfies or conflicts - with this requirement, and set self.satisfied_by or - self.should_reinstall appropriately. - """ - if self.req is None: - return - existing_dist = get_default_environment().get_distribution(self.req.name) - if not existing_dist: - return - - version_compatible = self.req.specifier.contains( - existing_dist.version, - prereleases=True, - ) - if not version_compatible: - self.satisfied_by = None - if use_user_site: - if existing_dist.in_usersite: - self.should_reinstall = True - elif running_under_virtualenv() and existing_dist.in_site_packages: - raise InstallationError( - f"Will not install to the user site because it will " - f"lack sys.path precedence to {existing_dist.raw_name} " - f"in {existing_dist.location}" - ) - else: - self.should_reinstall = True - else: - if self.editable: - self.should_reinstall = True - # when installing editables, nothing pre-existing should ever - # satisfy - self.satisfied_by = None - else: - self.satisfied_by = existing_dist - - # Things valid for wheels - @property - def is_wheel(self) -> bool: - if not self.link: - return False - return self.link.is_wheel - - @property - def is_wheel_from_cache(self) -> bool: - # When True, it means that this InstallRequirement is a local wheel file in the - # cache of locally built wheels. - return self.cached_wheel_source_link is not None - - # Things valid for sdists - @property - def unpacked_source_directory(self) -> str: - return os.path.join( - self.source_dir, self.link and self.link.subdirectory_fragment or "" - ) - - @property - def setup_py_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_py = os.path.join(self.unpacked_source_directory, "setup.py") - - return setup_py - - @property - def setup_cfg_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_cfg = os.path.join(self.unpacked_source_directory, "setup.cfg") - - return setup_cfg - - @property - def pyproject_toml_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - return make_pyproject_path(self.unpacked_source_directory) - - def load_pyproject_toml(self) -> None: - """Load the pyproject.toml file. - - After calling this routine, all of the attributes related to PEP 517 - processing for this requirement have been set. In particular, the - use_pep517 attribute can be used to determine whether we should - follow the PEP 517 or legacy (setup.py) code path. - """ - pyproject_toml_data = load_pyproject_toml( - self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self) - ) - - if pyproject_toml_data is None: - if self.config_settings: - deprecated( - reason=f"Config settings are ignored for project {self}.", - replacement=( - "to use --use-pep517 or add a " - "pyproject.toml file to the project" - ), - gone_in="23.3", - ) - self.use_pep517 = False - return - - self.use_pep517 = True - requires, backend, check, backend_path = pyproject_toml_data - self.requirements_to_check = check - self.pyproject_requires = requires - self.pep517_backend = ConfiguredBuildBackendHookCaller( - self, - self.unpacked_source_directory, - backend, - backend_path=backend_path, - ) - - def isolated_editable_sanity_check(self) -> None: - """Check that an editable requirement if valid for use with PEP 517/518. - - This verifies that an editable that has a pyproject.toml either supports PEP 660 - or as a setup.py or a setup.cfg - """ - if ( - self.editable - and self.use_pep517 - and not self.supports_pyproject_editable() - and not os.path.isfile(self.setup_py_path) - and not os.path.isfile(self.setup_cfg_path) - ): - raise InstallationError( - f"Project {self} has a 'pyproject.toml' and its build " - f"backend is missing the 'build_editable' hook. Since it does not " - f"have a 'setup.py' nor a 'setup.cfg', " - f"it cannot be installed in editable mode. " - f"Consider using a build backend that supports PEP 660." - ) - - def prepare_metadata(self) -> None: - """Ensure that project metadata is available. - - Under PEP 517 and PEP 660, call the backend hook to prepare the metadata. - Under legacy processing, call setup.py egg-info. - """ - assert self.source_dir - details = self.name or f"from {self.link}" - - if self.use_pep517: - assert self.pep517_backend is not None - if ( - self.editable - and self.permit_editable_wheels - and self.supports_pyproject_editable() - ): - self.metadata_directory = generate_editable_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata_legacy( - build_env=self.build_env, - setup_py_path=self.setup_py_path, - source_dir=self.unpacked_source_directory, - isolated=self.isolated, - details=details, - ) - - # Act on the newly generated metadata, based on the name and version. - if not self.name: - self._set_requirement() - else: - self.warn_on_mismatching_name() - - self.assert_source_matches_version() - - @property - def metadata(self) -> Any: - if not hasattr(self, "_metadata"): - self._metadata = self.get_dist().metadata - - return self._metadata - - def get_dist(self) -> BaseDistribution: - if self.metadata_directory: - return get_directory_distribution(self.metadata_directory) - elif self.local_file_path and self.is_wheel: - return get_wheel_distribution( - FilesystemWheel(self.local_file_path), canonicalize_name(self.name) - ) - raise AssertionError( - f"InstallRequirement {self} has no metadata directory and no wheel: " - f"can't make a distribution." - ) - - def assert_source_matches_version(self) -> None: - assert self.source_dir - version = self.metadata["version"] - if self.req.specifier and version not in self.req.specifier: - logger.warning( - "Requested %s, but installing version %s", - self, - version, - ) - else: - logger.debug( - "Source in %s has version %s, which satisfies requirement %s", - display_path(self.source_dir), - version, - self, - ) - - # For both source distributions and editables - def ensure_has_source_dir( - self, - parent_dir: str, - autodelete: bool = False, - parallel_builds: bool = False, - ) -> None: - """Ensure that a source_dir is set. - - This will create a temporary build dir if the name of the requirement - isn't known yet. - - :param parent_dir: The ideal pip parent_dir for the source_dir. - Generally src_dir for editables and build_dir for sdists. - :return: self.source_dir - """ - if self.source_dir is None: - self.source_dir = self.ensure_build_location( - parent_dir, - autodelete=autodelete, - parallel_builds=parallel_builds, - ) - - # For editable installations - def update_editable(self) -> None: - if not self.link: - logger.debug( - "Cannot update repository at %s; repository location is unknown", - self.source_dir, - ) - return - assert self.editable - assert self.source_dir - if self.link.scheme == "file": - # Static paths don't get updated - return - vcs_backend = vcs.get_backend_for_scheme(self.link.scheme) - # Editable requirements are validated in Requirement constructors. - # So here, if it's neither a path nor a valid VCS URL, it's a bug. - assert vcs_backend, f"Unsupported VCS URL {self.link.url}" - hidden_url = hide_url(self.link.url) - vcs_backend.obtain(self.source_dir, url=hidden_url, verbosity=0) - - # Top-level Actions - def uninstall( - self, auto_confirm: bool = False, verbose: bool = False - ) -> Optional[UninstallPathSet]: - """ - Uninstall the distribution currently satisfying this requirement. - - Prompts before removing or modifying files unless - ``auto_confirm`` is True. - - Refuses to delete or modify files outside of ``sys.prefix`` - - thus uninstallation within a virtual environment can only - modify that virtual environment, even if the virtualenv is - linked to global site-packages. - - """ - assert self.req - dist = get_default_environment().get_distribution(self.req.name) - if not dist: - logger.warning("Skipping %s as it is not installed.", self.name) - return None - logger.info("Found existing installation: %s", dist) - - uninstalled_pathset = UninstallPathSet.from_dist(dist) - uninstalled_pathset.remove(auto_confirm, verbose) - return uninstalled_pathset - - def _get_archive_name(self, path: str, parentdir: str, rootdir: str) -> str: - def _clean_zip_name(name: str, prefix: str) -> str: - assert name.startswith( - prefix + os.path.sep - ), f"name {name!r} doesn't start with prefix {prefix!r}" - name = name[len(prefix) + 1 :] - name = name.replace(os.path.sep, "/") - return name - - path = os.path.join(parentdir, path) - name = _clean_zip_name(path, rootdir) - return self.name + "/" + name - - def archive(self, build_dir: Optional[str]) -> None: - """Saves archive to provided build_dir. - - Used for saving downloaded VCS requirements as part of `pip download`. - """ - assert self.source_dir - if build_dir is None: - return - - create_archive = True - archive_name = "{}-{}.zip".format(self.name, self.metadata["version"]) - archive_path = os.path.join(build_dir, archive_name) - - if os.path.exists(archive_path): - response = ask_path_exists( - "The file {} exists. (i)gnore, (w)ipe, " - "(b)ackup, (a)bort ".format(display_path(archive_path)), - ("i", "w", "b", "a"), - ) - if response == "i": - create_archive = False - elif response == "w": - logger.warning("Deleting %s", display_path(archive_path)) - os.remove(archive_path) - elif response == "b": - dest_file = backup_dir(archive_path) - logger.warning( - "Backing up %s to %s", - display_path(archive_path), - display_path(dest_file), - ) - shutil.move(archive_path, dest_file) - elif response == "a": - sys.exit(-1) - - if not create_archive: - return - - zip_output = zipfile.ZipFile( - archive_path, - "w", - zipfile.ZIP_DEFLATED, - allowZip64=True, - ) - with zip_output: - dir = os.path.normcase(os.path.abspath(self.unpacked_source_directory)) - for dirpath, dirnames, filenames in os.walk(dir): - for dirname in dirnames: - dir_arcname = self._get_archive_name( - dirname, - parentdir=dirpath, - rootdir=dir, - ) - zipdir = zipfile.ZipInfo(dir_arcname + "/") - zipdir.external_attr = 0x1ED << 16 # 0o755 - zip_output.writestr(zipdir, "") - for filename in filenames: - file_arcname = self._get_archive_name( - filename, - parentdir=dirpath, - rootdir=dir, - ) - filename = os.path.join(dirpath, filename) - zip_output.write(filename, file_arcname) - - logger.info("Saved %s", display_path(archive_path)) - - def install( - self, - global_options: Optional[Sequence[str]] = None, - root: Optional[str] = None, - home: Optional[str] = None, - prefix: Optional[str] = None, - warn_script_location: bool = True, - use_user_site: bool = False, - pycompile: bool = True, - ) -> None: - scheme = get_scheme( - self.name, - user=use_user_site, - home=home, - root=root, - isolated=self.isolated, - prefix=prefix, - ) - - if self.editable and not self.is_wheel: - install_editable_legacy( - global_options=global_options if global_options is not None else [], - prefix=prefix, - home=home, - use_user_site=use_user_site, - name=self.name, - setup_py_path=self.setup_py_path, - isolated=self.isolated, - build_env=self.build_env, - unpacked_source_directory=self.unpacked_source_directory, - ) - self.install_succeeded = True - return - - assert self.is_wheel - assert self.local_file_path - - install_wheel( - self.name, - self.local_file_path, - scheme=scheme, - req_description=str(self.req), - pycompile=pycompile, - warn_script_location=warn_script_location, - direct_url=self.download_info if self.original_link else None, - requested=self.user_supplied, - ) - self.install_succeeded = True - - -def check_invalid_constraint_type(req: InstallRequirement) -> str: - # Check for unsupported forms - problem = "" - if not req.name: - problem = "Unnamed requirements are not allowed as constraints" - elif req.editable: - problem = "Editable requirements are not allowed as constraints" - elif req.extras: - problem = "Constraints cannot have extras" - - if problem: - deprecated( - reason=( - "Constraints are only allowed to take the form of a package " - "name and a version specifier. Other forms were originally " - "permitted as an accident of the implementation, but were " - "undocumented. The new implementation of the resolver no " - "longer supports these forms." - ), - replacement="replacing the constraint with a requirement", - # No plan yet for when the new resolver becomes default - gone_in=None, - issue=8210, - ) - - return problem - - -def _has_option(options: Values, reqs: List[InstallRequirement], option: str) -> bool: - if getattr(options, option, None): - return True - for req in reqs: - if getattr(req, option, None): - return True - return False - - -def check_legacy_setup_py_options( - options: Values, - reqs: List[InstallRequirement], -) -> None: - has_build_options = _has_option(options, reqs, "build_options") - has_global_options = _has_option(options, reqs, "global_options") - if has_build_options or has_global_options: - deprecated( - reason="--build-option and --global-option are deprecated.", - issue=11859, - replacement="to use --config-settings", - gone_in="23.3", - ) - logger.warning( - "Implying --no-binary=:all: due to the presence of " - "--build-option / --global-option. " - ) - options.format_control.disallow_binaries() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/util.py deleted file mode 100644 index 8032962dc994bd2b62e98f02016c88d0994e2f58..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/util.py +++ /dev/null @@ -1,308 +0,0 @@ -""" - pygments.util - ~~~~~~~~~~~~~ - - Utility functions. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -from io import TextIOWrapper - - -split_path_re = re.compile(r'[/\\ ]') -doctype_lookup_re = re.compile(r''' - ]*> -''', re.DOTALL | re.MULTILINE | re.VERBOSE) -tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?', - re.IGNORECASE | re.DOTALL | re.MULTILINE) -xml_decl_re = re.compile(r'\s*<\?xml[^>]*\?>', re.I) - - -class ClassNotFound(ValueError): - """Raised if one of the lookup functions didn't find a matching class.""" - - -class OptionError(Exception): - pass - - -def get_choice_opt(options, optname, allowed, default=None, normcase=False): - string = options.get(optname, default) - if normcase: - string = string.lower() - if string not in allowed: - raise OptionError('Value for option %s must be one of %s' % - (optname, ', '.join(map(str, allowed)))) - return string - - -def get_bool_opt(options, optname, default=None): - string = options.get(optname, default) - if isinstance(string, bool): - return string - elif isinstance(string, int): - return bool(string) - elif not isinstance(string, str): - raise OptionError('Invalid type %r for option %s; use ' - '1/0, yes/no, true/false, on/off' % ( - string, optname)) - elif string.lower() in ('1', 'yes', 'true', 'on'): - return True - elif string.lower() in ('0', 'no', 'false', 'off'): - return False - else: - raise OptionError('Invalid value %r for option %s; use ' - '1/0, yes/no, true/false, on/off' % ( - string, optname)) - - -def get_int_opt(options, optname, default=None): - string = options.get(optname, default) - try: - return int(string) - except TypeError: - raise OptionError('Invalid type %r for option %s; you ' - 'must give an integer value' % ( - string, optname)) - except ValueError: - raise OptionError('Invalid value %r for option %s; you ' - 'must give an integer value' % ( - string, optname)) - - -def get_list_opt(options, optname, default=None): - val = options.get(optname, default) - if isinstance(val, str): - return val.split() - elif isinstance(val, (list, tuple)): - return list(val) - else: - raise OptionError('Invalid type %r for option %s; you ' - 'must give a list value' % ( - val, optname)) - - -def docstring_headline(obj): - if not obj.__doc__: - return '' - res = [] - for line in obj.__doc__.strip().splitlines(): - if line.strip(): - res.append(" " + line.strip()) - else: - break - return ''.join(res).lstrip() - - -def make_analysator(f): - """Return a static text analyser function that returns float values.""" - def text_analyse(text): - try: - rv = f(text) - except Exception: - return 0.0 - if not rv: - return 0.0 - try: - return min(1.0, max(0.0, float(rv))) - except (ValueError, TypeError): - return 0.0 - text_analyse.__doc__ = f.__doc__ - return staticmethod(text_analyse) - - -def shebang_matches(text, regex): - r"""Check if the given regular expression matches the last part of the - shebang if one exists. - - >>> from pygments.util import shebang_matches - >>> shebang_matches('#!/usr/bin/env python', r'python(2\.\d)?') - True - >>> shebang_matches('#!/usr/bin/python2.4', r'python(2\.\d)?') - True - >>> shebang_matches('#!/usr/bin/python-ruby', r'python(2\.\d)?') - False - >>> shebang_matches('#!/usr/bin/python/ruby', r'python(2\.\d)?') - False - >>> shebang_matches('#!/usr/bin/startsomethingwith python', - ... r'python(2\.\d)?') - True - - It also checks for common windows executable file extensions:: - - >>> shebang_matches('#!C:\\Python2.4\\Python.exe', r'python(2\.\d)?') - True - - Parameters (``'-f'`` or ``'--foo'`` are ignored so ``'perl'`` does - the same as ``'perl -e'``) - - Note that this method automatically searches the whole string (eg: - the regular expression is wrapped in ``'^$'``) - """ - index = text.find('\n') - if index >= 0: - first_line = text[:index].lower() - else: - first_line = text.lower() - if first_line.startswith('#!'): - try: - found = [x for x in split_path_re.split(first_line[2:].strip()) - if x and not x.startswith('-')][-1] - except IndexError: - return False - regex = re.compile(r'^%s(\.(exe|cmd|bat|bin))?$' % regex, re.IGNORECASE) - if regex.search(found) is not None: - return True - return False - - -def doctype_matches(text, regex): - """Check if the doctype matches a regular expression (if present). - - Note that this method only checks the first part of a DOCTYPE. - eg: 'html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"' - """ - m = doctype_lookup_re.search(text) - if m is None: - return False - doctype = m.group(1) - return re.compile(regex, re.I).match(doctype.strip()) is not None - - -def html_doctype_matches(text): - """Check if the file looks like it has a html doctype.""" - return doctype_matches(text, r'html') - - -_looks_like_xml_cache = {} - - -def looks_like_xml(text): - """Check if a doctype exists or if we have some tags.""" - if xml_decl_re.match(text): - return True - key = hash(text) - try: - return _looks_like_xml_cache[key] - except KeyError: - m = doctype_lookup_re.search(text) - if m is not None: - return True - rv = tag_re.search(text[:1000]) is not None - _looks_like_xml_cache[key] = rv - return rv - - -def surrogatepair(c): - """Given a unicode character code with length greater than 16 bits, - return the two 16 bit surrogate pair. - """ - # From example D28 of: - # http://www.unicode.org/book/ch03.pdf - return (0xd7c0 + (c >> 10), (0xdc00 + (c & 0x3ff))) - - -def format_lines(var_name, seq, raw=False, indent_level=0): - """Formats a sequence of strings for output.""" - lines = [] - base_indent = ' ' * indent_level * 4 - inner_indent = ' ' * (indent_level + 1) * 4 - lines.append(base_indent + var_name + ' = (') - if raw: - # These should be preformatted reprs of, say, tuples. - for i in seq: - lines.append(inner_indent + i + ',') - else: - for i in seq: - # Force use of single quotes - r = repr(i + '"') - lines.append(inner_indent + r[:-2] + r[-1] + ',') - lines.append(base_indent + ')') - return '\n'.join(lines) - - -def duplicates_removed(it, already_seen=()): - """ - Returns a list with duplicates removed from the iterable `it`. - - Order is preserved. - """ - lst = [] - seen = set() - for i in it: - if i in seen or i in already_seen: - continue - lst.append(i) - seen.add(i) - return lst - - -class Future: - """Generic class to defer some work. - - Handled specially in RegexLexerMeta, to support regex string construction at - first use. - """ - def get(self): - raise NotImplementedError - - -def guess_decode(text): - """Decode *text* with guessed encoding. - - First try UTF-8; this should fail for non-UTF-8 encodings. - Then try the preferred locale encoding. - Fall back to latin-1, which always works. - """ - try: - text = text.decode('utf-8') - return text, 'utf-8' - except UnicodeDecodeError: - try: - import locale - prefencoding = locale.getpreferredencoding() - text = text.decode() - return text, prefencoding - except (UnicodeDecodeError, LookupError): - text = text.decode('latin1') - return text, 'latin1' - - -def guess_decode_from_terminal(text, term): - """Decode *text* coming from terminal *term*. - - First try the terminal encoding, if given. - Then try UTF-8. Then try the preferred locale encoding. - Fall back to latin-1, which always works. - """ - if getattr(term, 'encoding', None): - try: - text = text.decode(term.encoding) - except UnicodeDecodeError: - pass - else: - return text, term.encoding - return guess_decode(text) - - -def terminal_encoding(term): - """Return our best guess of encoding for the given *term*.""" - if getattr(term, 'encoding', None): - return term.encoding - import locale - return locale.getpreferredencoding() - - -class UnclosingTextIOWrapper(TextIOWrapper): - # Don't close underlying buffer on destruction. - def close(self): - self.flush() diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dep_util.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dep_util.py deleted file mode 100644 index db1fa01996ce0d47cd7f070c53b085926440d377..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/dep_util.py +++ /dev/null @@ -1,96 +0,0 @@ -"""distutils.dep_util - -Utility functions for simple, timestamp-based dependency of files -and groups of files; also, function based entirely on such -timestamp dependency analysis.""" - -import os -from distutils.errors import DistutilsFileError - - -def newer(source, target): - """Return true if 'source' exists and is more recently modified than - 'target', or if 'source' exists and 'target' doesn't. Return false if - both exist and 'target' is the same age or younger than 'source'. - Raise DistutilsFileError if 'source' does not exist. - """ - if not os.path.exists(source): - raise DistutilsFileError("file '%s' does not exist" % os.path.abspath(source)) - if not os.path.exists(target): - return 1 - - from stat import ST_MTIME - - mtime1 = os.stat(source)[ST_MTIME] - mtime2 = os.stat(target)[ST_MTIME] - - return mtime1 > mtime2 - - -# newer () - - -def newer_pairwise(sources, targets): - """Walk two filename lists in parallel, testing if each source is newer - than its corresponding target. Return a pair of lists (sources, - targets) where source is newer than target, according to the semantics - of 'newer()'. - """ - if len(sources) != len(targets): - raise ValueError("'sources' and 'targets' must be same length") - - # build a pair of lists (sources, targets) where source is newer - n_sources = [] - n_targets = [] - for i in range(len(sources)): - if newer(sources[i], targets[i]): - n_sources.append(sources[i]) - n_targets.append(targets[i]) - - return (n_sources, n_targets) - - -# newer_pairwise () - - -def newer_group(sources, target, missing='error'): - """Return true if 'target' is out-of-date with respect to any file - listed in 'sources'. In other words, if 'target' exists and is newer - than every file in 'sources', return false; otherwise return true. - 'missing' controls what we do when a source file is missing; the - default ("error") is to blow up with an OSError from inside 'stat()'; - if it is "ignore", we silently drop any missing source files; if it is - "newer", any missing source files make us assume that 'target' is - out-of-date (this is handy in "dry-run" mode: it'll make you pretend to - carry out commands that wouldn't work because inputs are missing, but - that doesn't matter because you're not actually going to run the - commands). - """ - # If the target doesn't even exist, then it's definitely out-of-date. - if not os.path.exists(target): - return 1 - - # Otherwise we have to find out the hard way: if *any* source file - # is more recent than 'target', then 'target' is out-of-date and - # we can immediately return true. If we fall through to the end - # of the loop, then 'target' is up-to-date and we return false. - from stat import ST_MTIME - - target_mtime = os.stat(target)[ST_MTIME] - for source in sources: - if not os.path.exists(source): - if missing == 'error': # blow up when we stat() the file - pass - elif missing == 'ignore': # missing source dropped from - continue # target's dependency list - elif missing == 'newer': # missing source means target is - return 1 # out-of-date - - source_mtime = os.stat(source)[ST_MTIME] - if source_mtime > target_mtime: - return 1 - else: - return 0 - - -# newer_group () diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h deleted file mode 100644 index f0dd981745a5a2b97b44d4d232a131e9255c02fe..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/csrc/nms_rotated/nms_rotated.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -#pragma once -#include - -namespace detectron2 { - -at::Tensor nms_rotated_cpu( - const at::Tensor& dets, - const at::Tensor& scores, - const float iou_threshold); - -#ifdef WITH_CUDA -at::Tensor nms_rotated_cuda( - const at::Tensor& dets, - const at::Tensor& scores, - const float iou_threshold); -#endif - -// Interface for Python -// inline is needed to prevent multiple function definitions when this header is -// included by different cpps -inline at::Tensor nms_rotated( - const at::Tensor& dets, - const at::Tensor& scores, - const float iou_threshold) { - assert(dets.device().is_cuda() == scores.device().is_cuda()); - if (dets.device().is_cuda()) { -#ifdef WITH_CUDA - return nms_rotated_cuda(dets, scores, iou_threshold); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - - return nms_rotated_cpu(dets, scores, iou_threshold); -} - -} // namespace detectron2 diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/layers/swap_align2nat.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/layers/swap_align2nat.py deleted file mode 100644 index a72c98a968577eff2302d75e4cb41620e4ecf582..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/tensormask/layers/swap_align2nat.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from torch import nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable - -from tensormask import _C - - -class _SwapAlign2Nat(Function): - @staticmethod - def forward(ctx, X, lambda_val, pad_val): - ctx.lambda_val = lambda_val - ctx.input_shape = X.size() - - Y = _C.swap_align2nat_forward(X, lambda_val, pad_val) - return Y - - @staticmethod - @once_differentiable - def backward(ctx, gY): - lambda_val = ctx.lambda_val - bs, ch, h, w = ctx.input_shape - - gX = _C.swap_align2nat_backward(gY, lambda_val, bs, ch, h, w) - - return gX, None, None - - -swap_align2nat = _SwapAlign2Nat.apply - - -class SwapAlign2Nat(nn.Module): - """ - The op `SwapAlign2Nat` described in https://arxiv.org/abs/1903.12174. - Given an input tensor that predicts masks of shape (N, C=VxU, H, W), - apply the op, it will return masks of shape (N, V'xU', H', W') where - the unit lengths of (V, U) and (H, W) are swapped, and the mask representation - is transformed from aligned to natural. - Args: - lambda_val (int): the relative unit length ratio between (V, U) and (H, W), - as we always have larger unit lengths for (V, U) than (H, W), - lambda_val is always >= 1. - pad_val (float): padding value for the values falling outside of the input - tensor, default set to -6 as sigmoid(-6) is ~0, indicating - that is no masks outside of the tensor. - """ - - def __init__(self, lambda_val, pad_val=-6.0): - super(SwapAlign2Nat, self).__init__() - self.lambda_val = lambda_val - self.pad_val = pad_val - - def forward(self, X): - return swap_align2nat(X, self.lambda_val, self.pad_val) - - def __repr__(self): - tmpstr = self.__class__.__name__ + "(" - tmpstr += "lambda_val=" + str(self.lambda_val) - tmpstr += ", pad_val=" + str(self.pad_val) - tmpstr += ")" - return tmpstr diff --git a/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/modules-checkpoint.py b/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/modules-checkpoint.py deleted file mode 100644 index 3e8bf875ccd6dffb51bb5acb25f0302fe0032d6c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/modules-checkpoint.py +++ /dev/null @@ -1,194 +0,0 @@ -import torch -import torch.nn as nn -from monoscene.DDR import Bottleneck3D - - -class ASPP(nn.Module): - """ - ASPP 3D - Adapt from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7 - """ - - def __init__(self, planes, dilations_conv_list): - super().__init__() - - # ASPP Block - self.conv_list = dilations_conv_list - self.conv1 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn1 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.conv2 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn2 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.relu = nn.ReLU() - - def forward(self, x_in): - - y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in))))) - for i in range(1, len(self.conv_list)): - y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in))))) - x_in = self.relu(y + x_in) # modified - - return x_in - - -class SegmentationHead(nn.Module): - """ - 3D Segmentation heads to retrieve semantic segmentation at each scale. - Formed by Dim expansion, Conv3D, ASPP block, Conv3D. - Taken from https://github.com/cv-rits/LMSCNet/blob/main/LMSCNet/models/LMSCNet.py#L7 - """ - - def __init__(self, inplanes, planes, nbr_classes, dilations_conv_list): - super().__init__() - - # First convolution - self.conv0 = nn.Conv3d(inplanes, planes, kernel_size=3, padding=1, stride=1) - - # ASPP Block - self.conv_list = dilations_conv_list - self.conv1 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn1 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.conv2 = nn.ModuleList( - [ - nn.Conv3d( - planes, planes, kernel_size=3, padding=dil, dilation=dil, bias=False - ) - for dil in dilations_conv_list - ] - ) - self.bn2 = nn.ModuleList( - [nn.BatchNorm3d(planes) for dil in dilations_conv_list] - ) - self.relu = nn.ReLU() - - self.conv_classes = nn.Conv3d( - planes, nbr_classes, kernel_size=3, padding=1, stride=1 - ) - - def forward(self, x_in): - - # Convolution to go from inplanes to planes features... - x_in = self.relu(self.conv0(x_in)) - - y = self.bn2[0](self.conv2[0](self.relu(self.bn1[0](self.conv1[0](x_in))))) - for i in range(1, len(self.conv_list)): - y += self.bn2[i](self.conv2[i](self.relu(self.bn1[i](self.conv1[i](x_in))))) - x_in = self.relu(y + x_in) # modified - - x_in = self.conv_classes(x_in) - - return x_in - - -class ProcessKitti(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]): - super(Process, self).__init__() - self.main = nn.Sequential( - *[ - Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - norm_layer=norm_layer, - dilation=[i, i, i], - ) - for i in dilations - ] - ) - - def forward(self, x): - return self.main(x) - - -class Process(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, dilations=[1, 2, 3]): - super(Process, self).__init__() - self.main = nn.Sequential( - *[ - Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - norm_layer=norm_layer, - dilation=[i, i, i], - ) - for i in dilations - ] - ) - - def forward(self, x): - return self.main(x) - - -class Upsample(nn.Module): - def __init__(self, in_channels, out_channels, norm_layer, bn_momentum): - super(Upsample, self).__init__() - self.main = nn.Sequential( - nn.ConvTranspose3d( - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - dilation=1, - output_padding=1, - ), - norm_layer(out_channels, momentum=bn_momentum), - nn.ReLU(), - ) - - def forward(self, x): - return self.main(x) - - -class Downsample(nn.Module): - def __init__(self, feature, norm_layer, bn_momentum, expansion=8): - super(Downsample, self).__init__() - self.main = Bottleneck3D( - feature, - feature // 4, - bn_momentum=bn_momentum, - expansion=expansion, - stride=2, - downsample=nn.Sequential( - nn.AvgPool3d(kernel_size=2, stride=2), - nn.Conv3d( - feature, - int(feature * expansion / 4), - kernel_size=1, - stride=1, - bias=False, - ), - norm_layer(int(feature * expansion / 4), momentum=bn_momentum), - ), - norm_layer=norm_layer, - ) - - def forward(self, x): - return self.main(x) diff --git a/spaces/CVPR/WALT/mmdet/datasets/xml_style.py b/spaces/CVPR/WALT/mmdet/datasets/xml_style.py deleted file mode 100644 index 71069488b0f6da3b37e588228f44460ce5f00679..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,170 +0,0 @@ -import os.path as osp -import xml.etree.ElementTree as ET - -import mmcv -import numpy as np -from PIL import Image - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class XMLDataset(CustomDataset): - """XML dataset for detection. - - Args: - min_size (int | float, optional): The minimum size of bounding - boxes in the images. If the size of a bounding box is less than - ``min_size``, it would be add to ignored field. - """ - - def __init__(self, min_size=None, **kwargs): - assert self.CLASSES or kwargs.get( - 'classes', None), 'CLASSES in `XMLDataset` can not be None.' - super(XMLDataset, self).__init__(**kwargs) - self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)} - self.min_size = min_size - - def load_annotations(self, ann_file): - """Load annotation from XML style ann_file. - - Args: - ann_file (str): Path of XML file. - - Returns: - list[dict]: Annotation info from XML file. - """ - - data_infos = [] - img_ids = mmcv.list_from_file(ann_file) - for img_id in img_ids: - filename = f'JPEGImages/{img_id}.jpg' - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_path = osp.join(self.img_prefix, 'JPEGImages', - '{}.jpg'.format(img_id)) - img = Image.open(img_path) - width, height = img.size - data_infos.append( - dict(id=img_id, filename=filename, width=width, height=height)) - - return data_infos - - def _filter_imgs(self, min_size=32): - """Filter images too small or without annotation.""" - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) < min_size: - continue - if self.filter_empty_gt: - img_id = img_info['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', - f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name in self.CLASSES: - valid_inds.append(i) - break - else: - valid_inds.append(i) - return valid_inds - - def get_ann_info(self, idx): - """Get annotation from XML file by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - bboxes = [] - labels = [] - bboxes_ignore = [] - labels_ignore = [] - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - # TODO: check whether it is necessary to use int - # Coordinates may be float type - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - ignore = False - if self.min_size: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.min_size or h < self.min_size: - ignore = True - if difficult or ignore: - bboxes_ignore.append(bbox) - labels_ignore.append(label) - else: - bboxes.append(bbox) - labels.append(label) - if not bboxes: - bboxes = np.zeros((0, 4)) - labels = np.zeros((0, )) - else: - bboxes = np.array(bboxes, ndmin=2) - 1 - labels = np.array(labels) - if not bboxes_ignore: - bboxes_ignore = np.zeros((0, 4)) - labels_ignore = np.zeros((0, )) - else: - bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1 - labels_ignore = np.array(labels_ignore) - ann = dict( - bboxes=bboxes.astype(np.float32), - labels=labels.astype(np.int64), - bboxes_ignore=bboxes_ignore.astype(np.float32), - labels_ignore=labels_ignore.astype(np.int64)) - return ann - - def get_cat_ids(self, idx): - """Get category ids in XML file by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - cat_ids = [] - img_id = self.data_infos[idx]['id'] - xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml') - tree = ET.parse(xml_path) - root = tree.getroot() - for obj in root.findall('object'): - name = obj.find('name').text - if name not in self.CLASSES: - continue - label = self.cat2label[name] - cat_ids.append(label) - - return cat_ids diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/utils/__init__.py b/spaces/Caoyunkang/Segment-Any-Anomaly/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ChandraMohanNayal/AutoGPT/ui/app.py b/spaces/ChandraMohanNayal/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
      {utils.format_directory(OUTPUT_DIR)}
      - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/ChrisPreston/diff-svc_minato_aqua/utils/__init__.py b/spaces/ChrisPreston/diff-svc_minato_aqua/utils/__init__.py deleted file mode 100644 index edd05b1cbcf86d489ce395ab90e50587c7bef4c6..0000000000000000000000000000000000000000 --- a/spaces/ChrisPreston/diff-svc_minato_aqua/utils/__init__.py +++ /dev/null @@ -1,250 +0,0 @@ -import glob -import logging -import re -import time -from collections import defaultdict -import os -import sys -import shutil -import types -import numpy as np -import torch -import torch.nn.functional as F -import torch.distributed as dist -from torch import nn - - -def tensors_to_scalars(metrics): - new_metrics = {} - for k, v in metrics.items(): - if isinstance(v, torch.Tensor): - v = v.item() - if type(v) is dict: - v = tensors_to_scalars(v) - new_metrics[k] = v - return new_metrics - - -class AvgrageMeter(object): - - def __init__(self): - self.reset() - - def reset(self): - self.avg = 0 - self.sum = 0 - self.cnt = 0 - - def update(self, val, n=1): - self.sum += val * n - self.cnt += n - self.avg = self.sum / self.cnt - - -def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) if max_len is None else max_len - res = values[0].new(len(values), size).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if shift_right: - dst[1:] = src[:-1] - dst[0] = shift_id - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)]) - return res - - -def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None): - """Convert a list of 2d tensors into a padded 3d tensor.""" - size = max(v.size(0) for v in values) if max_len is None else max_len - res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if shift_right: - dst[1:] = src[:-1] - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)]) - return res - - -def _is_batch_full(batch, num_tokens, max_tokens, max_sentences): - if len(batch) == 0: - return 0 - if len(batch) == max_sentences: - return 1 - if num_tokens > max_tokens: - return 1 - return 0 - - -def batch_by_size( - indices, num_tokens_fn, max_tokens=None, max_sentences=None, - required_batch_size_multiple=1, distributed=False -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - """ - max_tokens = max_tokens if max_tokens is not None else sys.maxsize - max_sentences = max_sentences if max_sentences is not None else sys.maxsize - bsz_mult = required_batch_size_multiple - - if isinstance(indices, types.GeneratorType): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - sample_len = 0 - sample_lens = [] - batch = [] - batches = [] - for i in range(len(indices)): - idx = indices[i] - num_tokens = num_tokens_fn(idx) - sample_lens.append(num_tokens) - sample_len = max(sample_len, num_tokens) - assert sample_len <= max_tokens, ( - "sentence at index {} of size {} exceeds max_tokens " - "limit of {}!".format(idx, sample_len, max_tokens) - ) - num_tokens = (len(batch) + 1) * sample_len - - if _is_batch_full(batch, num_tokens, max_tokens, max_sentences): - mod_len = max( - bsz_mult * (len(batch) // bsz_mult), - len(batch) % bsz_mult, - ) - batches.append(batch[:mod_len]) - batch = batch[mod_len:] - sample_lens = sample_lens[mod_len:] - sample_len = max(sample_lens) if len(sample_lens) > 0 else 0 - batch.append(idx) - if len(batch) > 0: - batches.append(batch) - return batches - - -def make_positions(tensor, padding_idx): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return ( - torch.cumsum(mask, dim=1).type_as(mask) * mask - ).long() + padding_idx - - -def softmax(x, dim): - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def unpack_dict_to_list(samples): - samples_ = [] - bsz = samples.get('outputs').size(0) - for i in range(bsz): - res = {} - for k, v in samples.items(): - try: - res[k] = v[i] - except: - pass - samples_.append(res) - return samples_ - - -def load_ckpt(cur_model, ckpt_base_dir, prefix_in_ckpt='model', force=True, strict=True): - if os.path.isfile(ckpt_base_dir): - base_dir = os.path.dirname(ckpt_base_dir) - checkpoint_path = [ckpt_base_dir] - else: - base_dir = ckpt_base_dir - checkpoint_path = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key= - lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x.replace('\\','/'))[0])) - if len(checkpoint_path) > 0: - checkpoint_path = checkpoint_path[-1] - state_dict = torch.load(checkpoint_path, map_location="cpu")["state_dict"] - state_dict = {k[len(prefix_in_ckpt) + 1:]: v for k, v in state_dict.items() - if k.startswith(f'{prefix_in_ckpt}.')} - if not strict: - cur_model_state_dict = cur_model.state_dict() - unmatched_keys = [] - for key, param in state_dict.items(): - if key in cur_model_state_dict: - new_param = cur_model_state_dict[key] - if new_param.shape != param.shape: - unmatched_keys.append(key) - print("| Unmatched keys: ", key, new_param.shape, param.shape) - for key in unmatched_keys: - del state_dict[key] - cur_model.load_state_dict(state_dict, strict=strict) - print(f"| load '{prefix_in_ckpt}' from '{checkpoint_path}'.") - else: - e_msg = f"| ckpt not found in {base_dir}." - if force: - assert False, e_msg - else: - print(e_msg) - - -def remove_padding(x, padding_idx=0): - if x is None: - return None - assert len(x.shape) in [1, 2] - if len(x.shape) == 2: # [T, H] - return x[np.abs(x).sum(-1) != padding_idx] - elif len(x.shape) == 1: # [T] - return x[x != padding_idx] - - -class Timer: - timer_map = {} - - def __init__(self, name, print_time=False): - if name not in Timer.timer_map: - Timer.timer_map[name] = 0 - self.name = name - self.print_time = print_time - - def __enter__(self): - self.t = time.time() - - def __exit__(self, exc_type, exc_val, exc_tb): - Timer.timer_map[self.name] += time.time() - self.t - if self.print_time: - print(self.name, Timer.timer_map[self.name]) - - -def print_arch(model, model_name='model'): - #print(f"| {model_name} Arch: ", model) - num_params(model, model_name=model_name) - - -def num_params(model, print_out=True, model_name="model"): - parameters = filter(lambda p: p.requires_grad, model.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - if print_out: - print(f'| {model_name} Trainable Parameters: %.3fM' % parameters) - return parameters diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/listener.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/listener.js deleted file mode 100644 index 644f7a1bb5ee78279807bce45a6733f333274b74..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/lib/listener/listener.js +++ /dev/null @@ -1,16 +0,0 @@ -import PluginsLoader from '../plugins/loader.js' - -export default class EventListener { - /** - * 事件监听 - * @param data.prefix 事件名称前缀 - * @param data.event 监听的事件 - * @param data.once 是否只监听一次 - */ - constructor (data) { - this.prefix = data.prefix || '' - this.event = data.event - this.once = data.once || false - this.plugins = PluginsLoader - } -} \ No newline at end of file diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Forefront.py b/spaces/CofAI/chat/g4f/Provider/Providers/Forefront.py deleted file mode 100644 index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/Forefront.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import json -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://forefront.com' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - json_data = { - 'text': messages[-1]['content'], - 'action': 'noauth', - 'id': '', - 'parentId': '', - 'workspaceId': '', - 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0', - 'model': 'gpt-4', - 'messages': messages[:-1] if len(messages) > 1 else [], - 'internetMode': 'auto' - } - response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat', - json=json_data, stream=True) - for token in response.iter_lines(): - if b'delta' in token: - token = json.loads(token.decode().split('data: ')[1])['delta'] - yield (token) -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/CuraAlizm/stabilityai-stable-diffusion-xl-base-1.0/app.py b/spaces/CuraAlizm/stabilityai-stable-diffusion-xl-base-1.0/app.py deleted file mode 100644 index 9520517f687cf7229ddfab9d8c5f8af7f76b0bd4..0000000000000000000000000000000000000000 --- a/spaces/CuraAlizm/stabilityai-stable-diffusion-xl-base-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-1.0").launch() \ No newline at end of file diff --git a/spaces/Cvandi/remake/tests/test_model.py b/spaces/Cvandi/remake/tests/test_model.py deleted file mode 100644 index c20bb1d56ed20222e929e9c94026f6ea383c6026..0000000000000000000000000000000000000000 --- a/spaces/Cvandi/remake/tests/test_model.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -import yaml -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.data.paired_image_dataset import PairedImageDataset -from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss - -from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN -from realesrgan.models.realesrgan_model import RealESRGANModel -from realesrgan.models.realesrnet_model import RealESRNetModel - - -def test_realesrnet_model(): - with open('tests/data/test_realesrnet_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRNetModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRNetModel' - assert isinstance(model.net_g, RRDBNet) - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - -def test_realesrgan_model(): - with open('tests/data/test_realesrgan_model.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - # build model - model = RealESRGANModel(opt) - # test attributes - assert model.__class__.__name__ == 'RealESRGANModel' - assert isinstance(model.net_g, RRDBNet) # generator - assert isinstance(model.net_d, UNetDiscriminatorSN) # discriminator - assert isinstance(model.cri_pix, L1Loss) - assert isinstance(model.cri_perceptual, PerceptualLoss) - assert isinstance(model.cri_gan, GANLoss) - assert isinstance(model.optimizers[0], torch.optim.Adam) - assert isinstance(model.optimizers[1], torch.optim.Adam) - - # prepare data - gt = torch.rand((1, 3, 32, 32), dtype=torch.float32) - kernel1 = torch.rand((1, 5, 5), dtype=torch.float32) - kernel2 = torch.rand((1, 5, 5), dtype=torch.float32) - sinc_kernel = torch.rand((1, 5, 5), dtype=torch.float32) - data = dict(gt=gt, kernel1=kernel1, kernel2=kernel2, sinc_kernel=sinc_kernel) - model.feed_data(data) - # check dequeue - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # change probability to test if-else - model.opt['gaussian_noise_prob'] = 0 - model.opt['gray_noise_prob'] = 0 - model.opt['second_blur_prob'] = 0 - model.opt['gaussian_noise_prob2'] = 0 - model.opt['gray_noise_prob2'] = 0 - model.feed_data(data) - # check data shape - assert model.lq.shape == (1, 3, 8, 8) - assert model.gt.shape == (1, 3, 32, 32) - - # ----------------- test nondist_validation -------------------- # - # construct dataloader - dataset_opt = dict( - name='Demo', - dataroot_gt='tests/data/gt', - dataroot_lq='tests/data/lq', - io_backend=dict(type='disk'), - scale=4, - phase='val') - dataset = PairedImageDataset(dataset_opt) - dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1, shuffle=False, num_workers=0) - assert model.is_train is True - model.nondist_validation(dataloader, 1, None, False) - assert model.is_train is True - - # ----------------- test optimize_parameters -------------------- # - model.feed_data(data) - model.optimize_parameters(1) - assert model.output.shape == (1, 3, 32, 32) - assert isinstance(model.log_dict, dict) - # check returned keys - expected_keys = ['l_g_pix', 'l_g_percep', 'l_g_gan', 'l_d_real', 'out_d_real', 'l_d_fake', 'out_d_fake'] - assert set(expected_keys).issubset(set(model.log_dict.keys())) diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/solver/build.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/solver/build.py deleted file mode 100644 index 865a4ec8d1b3d996b0618e3b2b77bd1b44acfa96..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/solver/build.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch - -from .lr_scheduler import WarmupMultiStepLR - - -def make_optimizer(cfg, model): - params = [] - for key, value in model.named_parameters(): - if not value.requires_grad: - continue - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - if "bias" in key: - lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BIAS_LR_FACTOR - weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS - params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}] - - optimizer = torch.optim.SGD(params, lr, momentum=cfg.SOLVER.MOMENTUM) - return optimizer - - -def make_lr_scheduler(cfg, optimizer): - return WarmupMultiStepLR( - optimizer, - cfg.SOLVER.STEPS, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/filters.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/filters.py deleted file mode 100644 index a1e40c98db853aa375ab0b24559e0559f91e6152..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/filters.py +++ /dev/null @@ -1,66 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly useful filters for `attr.asdict`. -""" - -from ._make import Attribute - - -def _split_what(what): - """ - Returns a tuple of `frozenset`s of classes and attributes. - """ - return ( - frozenset(cls for cls in what if isinstance(cls, type)), - frozenset(cls for cls in what if isinstance(cls, str)), - frozenset(cls for cls in what if isinstance(cls, Attribute)), - ) - - -def include(*what): - """ - Include *what*. - - :param what: What to include. - :type what: `list` of classes `type`, field names `str` or - `attrs.Attribute`\\ s - - :rtype: `callable` - - .. versionchanged:: 23.1.0 Accept strings with field names. - """ - cls, names, attrs = _split_what(what) - - def include_(attribute, value): - return ( - value.__class__ in cls - or attribute.name in names - or attribute in attrs - ) - - return include_ - - -def exclude(*what): - """ - Exclude *what*. - - :param what: What to exclude. - :type what: `list` of classes `type`, field names `str` or - `attrs.Attribute`\\ s. - - :rtype: `callable` - - .. versionchanged:: 23.3.0 Accept field name string as input argument - """ - cls, names, attrs = _split_what(what) - - def exclude_(attribute, value): - return not ( - value.__class__ in cls - or attribute.name in names - or attribute in attrs - ) - - return exclude_ diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psLib.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psLib.py deleted file mode 100644 index 1e0408ce9c16f9a784f53ef1d17af88b0ab65647..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psLib.py +++ /dev/null @@ -1,399 +0,0 @@ -from fontTools.misc.textTools import bytechr, byteord, bytesjoin, tobytes, tostr -from fontTools.misc import eexec -from .psOperators import ( - PSOperators, - ps_StandardEncoding, - ps_array, - ps_boolean, - ps_dict, - ps_integer, - ps_literal, - ps_mark, - ps_name, - ps_operator, - ps_procedure, - ps_procmark, - ps_real, - ps_string, -) -import re -from collections.abc import Callable -from string import whitespace -import logging - - -log = logging.getLogger(__name__) - -ps_special = b"()<>[]{}%" # / is one too, but we take care of that one differently - -skipwhiteRE = re.compile(bytesjoin([b"[", whitespace, b"]*"])) -endofthingPat = bytesjoin([b"[^][(){}<>/%", whitespace, b"]*"]) -endofthingRE = re.compile(endofthingPat) -commentRE = re.compile(b"%[^\n\r]*") - -# XXX This not entirely correct as it doesn't allow *nested* embedded parens: -stringPat = rb""" - \( - ( - ( - [^()]* \ [()] - ) - | - ( - [^()]* \( [^()]* \) - ) - )* - [^()]* - \) -""" -stringPat = b"".join(stringPat.split()) -stringRE = re.compile(stringPat) - -hexstringRE = re.compile(bytesjoin([b"<[", whitespace, b"0-9A-Fa-f]*>"])) - - -class PSTokenError(Exception): - pass - - -class PSError(Exception): - pass - - -class PSTokenizer(object): - def __init__(self, buf=b"", encoding="ascii"): - # Force self.buf to be a byte string - buf = tobytes(buf) - self.buf = buf - self.len = len(buf) - self.pos = 0 - self.closed = False - self.encoding = encoding - - def read(self, n=-1): - """Read at most 'n' bytes from the buffer, or less if the read - hits EOF before obtaining 'n' bytes. - If 'n' is negative or omitted, read all data until EOF is reached. - """ - if self.closed: - raise ValueError("I/O operation on closed file") - if n is None or n < 0: - newpos = self.len - else: - newpos = min(self.pos + n, self.len) - r = self.buf[self.pos : newpos] - self.pos = newpos - return r - - def close(self): - if not self.closed: - self.closed = True - del self.buf, self.pos - - def getnexttoken( - self, - # localize some stuff, for performance - len=len, - ps_special=ps_special, - stringmatch=stringRE.match, - hexstringmatch=hexstringRE.match, - commentmatch=commentRE.match, - endmatch=endofthingRE.match, - ): - - self.skipwhite() - if self.pos >= self.len: - return None, None - pos = self.pos - buf = self.buf - char = bytechr(byteord(buf[pos])) - if char in ps_special: - if char in b"{}[]": - tokentype = "do_special" - token = char - elif char == b"%": - tokentype = "do_comment" - _, nextpos = commentmatch(buf, pos).span() - token = buf[pos:nextpos] - elif char == b"(": - tokentype = "do_string" - m = stringmatch(buf, pos) - if m is None: - raise PSTokenError("bad string at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - elif char == b"<": - tokentype = "do_hexstring" - m = hexstringmatch(buf, pos) - if m is None: - raise PSTokenError("bad hexstring at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - else: - raise PSTokenError("bad token at character %d" % pos) - else: - if char == b"/": - tokentype = "do_literal" - m = endmatch(buf, pos + 1) - else: - tokentype = "" - m = endmatch(buf, pos) - if m is None: - raise PSTokenError("bad token at character %d" % pos) - _, nextpos = m.span() - token = buf[pos:nextpos] - self.pos = pos + len(token) - token = tostr(token, encoding=self.encoding) - return tokentype, token - - def skipwhite(self, whitematch=skipwhiteRE.match): - _, nextpos = whitematch(self.buf, self.pos).span() - self.pos = nextpos - - def starteexec(self): - self.pos = self.pos + 1 - self.dirtybuf = self.buf[self.pos :] - self.buf, R = eexec.decrypt(self.dirtybuf, 55665) - self.len = len(self.buf) - self.pos = 4 - - def stopeexec(self): - if not hasattr(self, "dirtybuf"): - return - self.buf = self.dirtybuf - del self.dirtybuf - - -class PSInterpreter(PSOperators): - def __init__(self, encoding="ascii"): - systemdict = {} - userdict = {} - self.encoding = encoding - self.dictstack = [systemdict, userdict] - self.stack = [] - self.proclevel = 0 - self.procmark = ps_procmark() - self.fillsystemdict() - - def fillsystemdict(self): - systemdict = self.dictstack[0] - systemdict["["] = systemdict["mark"] = self.mark = ps_mark() - systemdict["]"] = ps_operator("]", self.do_makearray) - systemdict["true"] = ps_boolean(1) - systemdict["false"] = ps_boolean(0) - systemdict["StandardEncoding"] = ps_array(ps_StandardEncoding) - systemdict["FontDirectory"] = ps_dict({}) - self.suckoperators(systemdict, self.__class__) - - def suckoperators(self, systemdict, klass): - for name in dir(klass): - attr = getattr(self, name) - if isinstance(attr, Callable) and name[:3] == "ps_": - name = name[3:] - systemdict[name] = ps_operator(name, attr) - for baseclass in klass.__bases__: - self.suckoperators(systemdict, baseclass) - - def interpret(self, data, getattr=getattr): - tokenizer = self.tokenizer = PSTokenizer(data, self.encoding) - getnexttoken = tokenizer.getnexttoken - do_token = self.do_token - handle_object = self.handle_object - try: - while 1: - tokentype, token = getnexttoken() - if not token: - break - if tokentype: - handler = getattr(self, tokentype) - object = handler(token) - else: - object = do_token(token) - if object is not None: - handle_object(object) - tokenizer.close() - self.tokenizer = None - except: - if self.tokenizer is not None: - log.debug( - "ps error:\n" - "- - - - - - -\n" - "%s\n" - ">>>\n" - "%s\n" - "- - - - - - -", - self.tokenizer.buf[self.tokenizer.pos - 50 : self.tokenizer.pos], - self.tokenizer.buf[self.tokenizer.pos : self.tokenizer.pos + 50], - ) - raise - - def handle_object(self, object): - if not (self.proclevel or object.literal or object.type == "proceduretype"): - if object.type != "operatortype": - object = self.resolve_name(object.value) - if object.literal: - self.push(object) - else: - if object.type == "proceduretype": - self.call_procedure(object) - else: - object.function() - else: - self.push(object) - - def call_procedure(self, proc): - handle_object = self.handle_object - for item in proc.value: - handle_object(item) - - def resolve_name(self, name): - dictstack = self.dictstack - for i in range(len(dictstack) - 1, -1, -1): - if name in dictstack[i]: - return dictstack[i][name] - raise PSError("name error: " + str(name)) - - def do_token( - self, - token, - int=int, - float=float, - ps_name=ps_name, - ps_integer=ps_integer, - ps_real=ps_real, - ): - try: - num = int(token) - except (ValueError, OverflowError): - try: - num = float(token) - except (ValueError, OverflowError): - if "#" in token: - hashpos = token.find("#") - try: - base = int(token[:hashpos]) - num = int(token[hashpos + 1 :], base) - except (ValueError, OverflowError): - return ps_name(token) - else: - return ps_integer(num) - else: - return ps_name(token) - else: - return ps_real(num) - else: - return ps_integer(num) - - def do_comment(self, token): - pass - - def do_literal(self, token): - return ps_literal(token[1:]) - - def do_string(self, token): - return ps_string(token[1:-1]) - - def do_hexstring(self, token): - hexStr = "".join(token[1:-1].split()) - if len(hexStr) % 2: - hexStr = hexStr + "0" - cleanstr = [] - for i in range(0, len(hexStr), 2): - cleanstr.append(chr(int(hexStr[i : i + 2], 16))) - cleanstr = "".join(cleanstr) - return ps_string(cleanstr) - - def do_special(self, token): - if token == "{": - self.proclevel = self.proclevel + 1 - return self.procmark - elif token == "}": - proc = [] - while 1: - topobject = self.pop() - if topobject == self.procmark: - break - proc.append(topobject) - self.proclevel = self.proclevel - 1 - proc.reverse() - return ps_procedure(proc) - elif token == "[": - return self.mark - elif token == "]": - return ps_name("]") - else: - raise PSTokenError("huh?") - - def push(self, object): - self.stack.append(object) - - def pop(self, *types): - stack = self.stack - if not stack: - raise PSError("stack underflow") - object = stack[-1] - if types: - if object.type not in types: - raise PSError( - "typecheck, expected %s, found %s" % (repr(types), object.type) - ) - del stack[-1] - return object - - def do_makearray(self): - array = [] - while 1: - topobject = self.pop() - if topobject == self.mark: - break - array.append(topobject) - array.reverse() - self.push(ps_array(array)) - - def close(self): - """Remove circular references.""" - del self.stack - del self.dictstack - - -def unpack_item(item): - tp = type(item.value) - if tp == dict: - newitem = {} - for key, value in item.value.items(): - newitem[key] = unpack_item(value) - elif tp == list: - newitem = [None] * len(item.value) - for i in range(len(item.value)): - newitem[i] = unpack_item(item.value[i]) - if item.type == "proceduretype": - newitem = tuple(newitem) - else: - newitem = item.value - return newitem - - -def suckfont(data, encoding="ascii"): - m = re.search(rb"/FontName\s+/([^ \t\n\r]+)\s+def", data) - if m: - fontName = m.group(1) - fontName = fontName.decode() - else: - fontName = None - interpreter = PSInterpreter(encoding=encoding) - interpreter.interpret( - b"/Helvetica 4 dict dup /Encoding StandardEncoding put definefont pop" - ) - interpreter.interpret(data) - fontdir = interpreter.dictstack[0]["FontDirectory"].value - if fontName in fontdir: - rawfont = fontdir[fontName] - else: - # fall back, in case fontName wasn't found - fontNames = list(fontdir.keys()) - if len(fontNames) > 1: - fontNames.remove("Helvetica") - fontNames.sort() - rawfont = fontdir[fontNames[0]] - interpreter.close() - return unpack_item(rawfont) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Textbox-1f11d244.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Textbox-1f11d244.js deleted file mode 100644 index 5a2cc70aeb07b058aa75ea95ee899f841ca7e0fa..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/Textbox-1f11d244.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as ue,e as fe,s as _e,N as z,k as H,O as ee,K as h,U as te,p as y,o as K,M as ge,u as Y,v,y as Z,z as k,A as p,x as L,B as ke,am as we,P as ve,R as ye,a7 as le,h as D,ap as N,aj as pe,Q as g,X as Te,a1 as G,m as oe,n as X,Z as qe,$ as Ee,ak as m,j as ie,t as ne,F as M,E as Be,ae as Ne,q as ze,r as Ce}from"./index-1d65707a.js";/* empty css */import{f as Se,B as je}from"./Button-f155035a.js";import{B as De}from"./BlockTitle-dee077e8.js";import{C as He,a as Ke}from"./Copy-9f1657c4.js";function Le(l){let e;return{c(){e=ve(l[3])},m(t,a){y(t,e,a)},p(t,a){a[0]&8&&ye(e,t[3])},d(t){t&&p(e)}}}function Ue(l){let e,t,a,n,i,u,d,c,r=l[6]&&l[10]&&se(l);return{c(){r&&r.c(),e=ee(),t=z("textarea"),h(t,"data-testid","textbox"),h(t,"class","scroll-hide svelte-1kcgrqr"),h(t,"dir",a=l[11]?"rtl":"ltr"),h(t,"placeholder",l[2]),h(t,"rows",l[1]),t.disabled=l[5],h(t,"style",n=l[12]?"text-align: "+l[12]:"")},m(s,o){r&&r.m(s,o),y(s,e,o),y(s,t,o),N(t,l[0]),l[28](t),u=!0,d||(c=[pe(i=l[19].call(null,t,l[0])),g(t,"input",l[27]),g(t,"keypress",l[18]),g(t,"blur",l[15]),g(t,"select",l[17])],d=!0)},p(s,o){s[6]&&s[10]?r?(r.p(s,o),o[0]&1088&&k(r,1)):(r=se(s),r.c(),k(r,1),r.m(e.parentNode,e)):r&&(Y(),v(r,1,1,()=>{r=null}),Z()),(!u||o[0]&2048&&a!==(a=s[11]?"rtl":"ltr"))&&h(t,"dir",a),(!u||o[0]&4)&&h(t,"placeholder",s[2]),(!u||o[0]&2)&&h(t,"rows",s[1]),(!u||o[0]&32)&&(t.disabled=s[5]),(!u||o[0]&4096&&n!==(n=s[12]?"text-align: "+s[12]:""))&&h(t,"style",n),i&&Te(i.update)&&o[0]&1&&i.update.call(null,s[0]),o[0]&1&&N(t,s[0])},i(s){u||(k(r),u=!0)},o(s){v(r),u=!1},d(s){s&&(p(e),p(t)),r&&r.d(s),l[28](null),d=!1,G(c)}}}function Ae(l){let e;function t(i,u){if(i[9]==="text")return Qe;if(i[9]==="password")return Pe;if(i[9]==="email")return Oe}let a=t(l),n=a&&a(l);return{c(){n&&n.c(),e=oe()},m(i,u){n&&n.m(i,u),y(i,e,u)},p(i,u){a===(a=t(i))&&n?n.p(i,u):(n&&n.d(1),n=a&&a(i),n&&(n.c(),n.m(e.parentNode,e)))},i:X,o:X,d(i){i&&p(e),n&&n.d(i)}}}function se(l){let e,t,a,n;const i=[Me,Fe],u=[];function d(c,r){return c[14]?0:1}return e=d(l),t=u[e]=i[e](l),{c(){t.c(),a=oe()},m(c,r){u[e].m(c,r),y(c,a,r),n=!0},p(c,r){let s=e;e=d(c),e===s?u[e].p(c,r):(Y(),v(u[s],1,1,()=>{u[s]=null}),Z(),t=u[e],t?t.p(c,r):(t=u[e]=i[e](c),t.c()),k(t,1),t.m(a.parentNode,a))},i(c){n||(k(t),n=!0)},o(c){v(t),n=!1},d(c){c&&p(a),u[e].d(c)}}}function Fe(l){let e,t,a,n,i;return t=new He({}),{c(){e=z("button"),H(t.$$.fragment),h(e,"class","copy-text svelte-1kcgrqr")},m(u,d){y(u,e,d),K(t,e,null),a=!0,n||(i=g(e,"click",l[16]),n=!0)},p:X,i(u){a||(k(t.$$.fragment,u),a=!0)},o(u){v(t.$$.fragment,u),a=!1},d(u){u&&p(e),L(t),n=!1,i()}}}function Me(l){let e,t,a,n;return t=new Ke({}),{c(){e=z("button"),H(t.$$.fragment),h(e,"class","svelte-1kcgrqr")},m(i,u){y(i,e,u),K(t,e,null),n=!0},p:X,i(i){n||(k(t.$$.fragment,i),i&&(a||qe(()=>{a=Ee(e,Se,{duration:300}),a.start()})),n=!0)},o(i){v(t.$$.fragment,i),n=!1},d(i){i&&p(e),L(t)}}}function Oe(l){let e,t,a;return{c(){e=z("input"),h(e,"data-testid","textbox"),h(e,"type","email"),h(e,"class","scroll-hide svelte-1kcgrqr"),h(e,"placeholder",l[2]),e.disabled=l[5],h(e,"autocomplete","email")},m(n,i){y(n,e,i),N(e,l[0]),l[26](e),t||(a=[g(e,"input",l[25]),g(e,"keypress",l[18]),g(e,"blur",l[15]),g(e,"select",l[17])],t=!0)},p(n,i){i[0]&4&&h(e,"placeholder",n[2]),i[0]&32&&(e.disabled=n[5]),i[0]&1&&e.value!==n[0]&&N(e,n[0])},d(n){n&&p(e),l[26](null),t=!1,G(a)}}}function Pe(l){let e,t,a;return{c(){e=z("input"),h(e,"data-testid","password"),h(e,"type","password"),h(e,"class","scroll-hide svelte-1kcgrqr"),h(e,"placeholder",l[2]),e.disabled=l[5],h(e,"autocomplete","")},m(n,i){y(n,e,i),N(e,l[0]),l[24](e),t||(a=[g(e,"input",l[23]),g(e,"keypress",l[18]),g(e,"blur",l[15]),g(e,"select",l[17])],t=!0)},p(n,i){i[0]&4&&h(e,"placeholder",n[2]),i[0]&32&&(e.disabled=n[5]),i[0]&1&&e.value!==n[0]&&N(e,n[0])},d(n){n&&p(e),l[24](null),t=!1,G(a)}}}function Qe(l){let e,t,a,n,i;return{c(){e=z("input"),h(e,"data-testid","textbox"),h(e,"type","text"),h(e,"class","scroll-hide svelte-1kcgrqr"),h(e,"dir",t=l[11]?"rtl":"ltr"),h(e,"placeholder",l[2]),e.disabled=l[5],h(e,"style",a=l[12]?"text-align: "+l[12]:"")},m(u,d){y(u,e,d),N(e,l[0]),l[22](e),n||(i=[g(e,"input",l[21]),g(e,"keypress",l[18]),g(e,"blur",l[15]),g(e,"select",l[17])],n=!0)},p(u,d){d[0]&2048&&t!==(t=u[11]?"rtl":"ltr")&&h(e,"dir",t),d[0]&4&&h(e,"placeholder",u[2]),d[0]&32&&(e.disabled=u[5]),d[0]&4096&&a!==(a=u[12]?"text-align: "+u[12]:"")&&h(e,"style",a),d[0]&1&&e.value!==u[0]&&N(e,u[0])},d(u){u&&p(e),l[22](null),n=!1,G(i)}}}function Re(l){let e,t,a,n,i,u;t=new De({props:{show_label:l[6],info:l[4],$$slots:{default:[Le]},$$scope:{ctx:l}}});const d=[Ae,Ue],c=[];function r(s,o){return s[1]===1&&s[8]===1?0:1}return n=r(l),i=c[n]=d[n](l),{c(){e=z("label"),H(t.$$.fragment),a=ee(),i.c(),h(e,"class","svelte-1kcgrqr"),te(e,"container",l[7])},m(s,o){y(s,e,o),K(t,e,null),ge(e,a),c[n].m(e,null),u=!0},p(s,o){const b={};o[0]&64&&(b.show_label=s[6]),o[0]&16&&(b.info=s[4]),o[0]&8|o[1]&8&&(b.$$scope={dirty:o,ctx:s}),t.$set(b);let q=n;n=r(s),n===q?c[n].p(s,o):(Y(),v(c[q],1,1,()=>{c[q]=null}),Z(),i=c[n],i?i.p(s,o):(i=c[n]=d[n](s),i.c()),k(i,1),i.m(e,null)),(!u||o[0]&128)&&te(e,"container",s[7])},i(s){u||(k(t.$$.fragment,s),k(i),u=!0)},o(s){v(t.$$.fragment,s),v(i),u=!1},d(s){s&&p(e),L(t),c[n].d()}}}function Xe(l,e,t){let{value:a=""}=e,{value_is_output:n=!1}=e,{lines:i=1}=e,{placeholder:u="Type here..."}=e,{label:d}=e,{info:c=void 0}=e,{disabled:r=!1}=e,{show_label:s=!0}=e,{container:o=!0}=e,{max_lines:b}=e,{type:q="text"}=e,{show_copy_button:U=!1}=e,{rtl:A=!1}=e,{text_align:F=void 0}=e,w,C=!1,S;const T=ke();function O(){T("change",a),n||T("input")}we(()=>{t(20,n=!1)});function P(){T("blur")}async function I(){"clipboard"in navigator&&(await navigator.clipboard.writeText(a),J())}function J(){t(14,C=!0),S&&clearTimeout(S),S=setTimeout(()=>{t(14,C=!1)},1e3)}function V(_){const E=_.target,Q=E.value,B=[E.selectionStart,E.selectionEnd];T("select",{value:Q.substring(...B),index:B})}async function W(_){await le(),(_.key==="Enter"&&_.shiftKey&&i>1||_.key==="Enter"&&!_.shiftKey&&i===1&&b>=1)&&(_.preventDefault(),T("submit"))}async function j(_){if(await le(),i===b||!o)return;let E=b===void 0?!1:b===void 0?21*11:21*(b+1),Q=21*(i+1);const B=_.target;B.style.height="1px";let R;E&&B.scrollHeight>E?R=E:B.scrollHeight_.removeEventListener("input",j)}}function $(){a=this.value,t(0,a)}function f(_){D[_?"unshift":"push"](()=>{w=_,t(13,w)})}function re(){a=this.value,t(0,a)}function ce(_){D[_?"unshift":"push"](()=>{w=_,t(13,w)})}function he(){a=this.value,t(0,a)}function be(_){D[_?"unshift":"push"](()=>{w=_,t(13,w)})}function de(){a=this.value,t(0,a)}function me(_){D[_?"unshift":"push"](()=>{w=_,t(13,w)})}return l.$$set=_=>{"value"in _&&t(0,a=_.value),"value_is_output"in _&&t(20,n=_.value_is_output),"lines"in _&&t(1,i=_.lines),"placeholder"in _&&t(2,u=_.placeholder),"label"in _&&t(3,d=_.label),"info"in _&&t(4,c=_.info),"disabled"in _&&t(5,r=_.disabled),"show_label"in _&&t(6,s=_.show_label),"container"in _&&t(7,o=_.container),"max_lines"in _&&t(8,b=_.max_lines),"type"in _&&t(9,q=_.type),"show_copy_button"in _&&t(10,U=_.show_copy_button),"rtl"in _&&t(11,A=_.rtl),"text_align"in _&&t(12,F=_.text_align)},l.$$.update=()=>{l.$$.dirty[0]&8451&&w&&i!==b&&j({target:w}),l.$$.dirty[0]&1&&O()},[a,i,u,d,c,r,s,o,b,q,U,A,F,w,C,P,I,V,W,x,n,$,f,re,ce,he,be,de,me]}let Ye=class extends ue{constructor(e){super(),fe(this,e,Xe,Re,_e,{value:0,value_is_output:20,lines:1,placeholder:2,label:3,info:4,disabled:5,show_label:6,container:7,max_lines:8,type:9,show_copy_button:10,rtl:11,text_align:12},null,[-1,-1])}};function ae(l){let e,t;const a=[l[16]];let n={};for(let i=0;iie(t,"value",d)),D.push(()=>ie(t,"value_is_output",c)),t.$on("change",l[22]),t.$on("input",l[23]),t.$on("submit",l[24]),t.$on("blur",l[25]),t.$on("select",l[26]),{c(){u&&u.c(),e=ee(),H(t.$$.fragment)},m(s,o){u&&u.m(s,o),y(s,e,o),K(t,s,o),i=!0},p(s,o){s[16]?u?(u.p(s,o),o&65536&&k(u,1)):(u=ae(s),u.c(),k(u,1),u.m(e.parentNode,e)):u&&(Y(),v(u,1,1,()=>{u=null}),Z());const b={};o&4&&(b.label=s[2]),o&8&&(b.info=s[3]),o&512&&(b.show_label=s[9]),o&128&&(b.lines=s[7]),o&2048&&(b.type=s[11]),o&262144&&(b.rtl=s[18]),o&524288&&(b.text_align=s[19]),o&132224&&(b.max_lines=!s[10]&&s[17]==="static"?s[7]+1:s[10]),o&256&&(b.placeholder=s[8]),o&32768&&(b.show_copy_button=s[15]),o&4096&&(b.container=s[12]),o&131072&&(b.disabled=s[17]==="static"),!a&&o&1&&(a=!0,b.value=s[0],ne(()=>a=!1)),!n&&o&2&&(n=!0,b.value_is_output=s[1],ne(()=>n=!1)),t.$set(b)},i(s){i||(k(u),k(t.$$.fragment,s),i=!0)},o(s){v(u),v(t.$$.fragment,s),i=!1},d(s){s&&p(e),u&&u.d(s),L(t,s)}}}function Ge(l){let e,t;return e=new je({props:{visible:l[6],elem_id:l[4],elem_classes:l[5],scale:l[13],min_width:l[14],allow_overflow:!1,padding:l[12],$$slots:{default:[Ze]},$$scope:{ctx:l}}}),{c(){H(e.$$.fragment)},m(a,n){K(e,a,n),t=!0},p(a,[n]){const i={};n&64&&(i.visible=a[6]),n&16&&(i.elem_id=a[4]),n&32&&(i.elem_classes=a[5]),n&8192&&(i.scale=a[13]),n&16384&&(i.min_width=a[14]),n&4096&&(i.padding=a[12]),n&135241615&&(i.$$scope={dirty:n,ctx:a}),e.$set(i)},i(a){t||(k(e.$$.fragment,a),t=!0)},o(a){v(e.$$.fragment,a),t=!1},d(a){L(e,a)}}}function Ie(l,e,t){let{label:a="Textbox"}=e,{info:n=void 0}=e,{elem_id:i=""}=e,{elem_classes:u=[]}=e,{visible:d=!0}=e,{value:c=""}=e,{lines:r}=e,{placeholder:s=""}=e,{show_label:o}=e,{max_lines:b}=e,{type:q="text"}=e,{container:U=!0}=e,{scale:A=null}=e,{min_width:F=void 0}=e,{show_copy_button:w=!1}=e,{loading_status:C=void 0}=e,{mode:S}=e,{value_is_output:T=!1}=e,{rtl:O=!1}=e,{text_align:P=void 0}=e;function I(f){c=f,t(0,c)}function J(f){T=f,t(1,T)}function V(f){M.call(this,l,f)}function W(f){M.call(this,l,f)}function j(f){M.call(this,l,f)}function x(f){M.call(this,l,f)}function $(f){M.call(this,l,f)}return l.$$set=f=>{"label"in f&&t(2,a=f.label),"info"in f&&t(3,n=f.info),"elem_id"in f&&t(4,i=f.elem_id),"elem_classes"in f&&t(5,u=f.elem_classes),"visible"in f&&t(6,d=f.visible),"value"in f&&t(0,c=f.value),"lines"in f&&t(7,r=f.lines),"placeholder"in f&&t(8,s=f.placeholder),"show_label"in f&&t(9,o=f.show_label),"max_lines"in f&&t(10,b=f.max_lines),"type"in f&&t(11,q=f.type),"container"in f&&t(12,U=f.container),"scale"in f&&t(13,A=f.scale),"min_width"in f&&t(14,F=f.min_width),"show_copy_button"in f&&t(15,w=f.show_copy_button),"loading_status"in f&&t(16,C=f.loading_status),"mode"in f&&t(17,S=f.mode),"value_is_output"in f&&t(1,T=f.value_is_output),"rtl"in f&&t(18,O=f.rtl),"text_align"in f&&t(19,P=f.text_align)},[c,T,a,n,i,u,d,r,s,o,b,q,U,A,F,w,C,S,O,P,I,J,V,W,j,x,$]}class tt extends ue{constructor(e){super(),fe(this,e,Ie,Ge,_e,{label:2,info:3,elem_id:4,elem_classes:5,visible:6,value:0,lines:7,placeholder:8,show_label:9,max_lines:10,type:11,container:12,scale:13,min_width:14,show_copy_button:15,loading_status:16,mode:17,value_is_output:1,rtl:18,text_align:19})}get label(){return this.$$.ctx[2]}set label(e){this.$$set({label:e}),m()}get info(){return this.$$.ctx[3]}set info(e){this.$$set({info:e}),m()}get elem_id(){return this.$$.ctx[4]}set elem_id(e){this.$$set({elem_id:e}),m()}get elem_classes(){return this.$$.ctx[5]}set elem_classes(e){this.$$set({elem_classes:e}),m()}get visible(){return this.$$.ctx[6]}set visible(e){this.$$set({visible:e}),m()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),m()}get lines(){return this.$$.ctx[7]}set lines(e){this.$$set({lines:e}),m()}get placeholder(){return this.$$.ctx[8]}set placeholder(e){this.$$set({placeholder:e}),m()}get show_label(){return this.$$.ctx[9]}set show_label(e){this.$$set({show_label:e}),m()}get max_lines(){return this.$$.ctx[10]}set max_lines(e){this.$$set({max_lines:e}),m()}get type(){return this.$$.ctx[11]}set type(e){this.$$set({type:e}),m()}get container(){return this.$$.ctx[12]}set container(e){this.$$set({container:e}),m()}get scale(){return this.$$.ctx[13]}set scale(e){this.$$set({scale:e}),m()}get min_width(){return this.$$.ctx[14]}set min_width(e){this.$$set({min_width:e}),m()}get show_copy_button(){return this.$$.ctx[15]}set show_copy_button(e){this.$$set({show_copy_button:e}),m()}get loading_status(){return this.$$.ctx[16]}set loading_status(e){this.$$set({loading_status:e}),m()}get mode(){return this.$$.ctx[17]}set mode(e){this.$$set({mode:e}),m()}get value_is_output(){return this.$$.ctx[1]}set value_is_output(e){this.$$set({value_is_output:e}),m()}get rtl(){return this.$$.ctx[18]}set rtl(e){this.$$set({rtl:e}),m()}get text_align(){return this.$$.ctx[19]}set text_align(e){this.$$set({text_align:e}),m()}}export{tt as T}; -//# sourceMappingURL=Textbox-1f11d244.js.map diff --git a/spaces/Dagfinn1962/CPU/README.md b/spaces/Dagfinn1962/CPU/README.md deleted file mode 100644 index 305b17c26e6cf9097d8ed11927e463c17017fa49..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/CPU/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SD 2.1 CPU -emoji: 🐢 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: Manjushri/SD-2.1-CPU ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/DaleChen/AutoGPT/autogpt/memory/no_memory.py b/spaces/DaleChen/AutoGPT/autogpt/memory/no_memory.py deleted file mode 100644 index 0371e96ae89f5eb88dae019a66351a229596ed7a..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/memory/no_memory.py +++ /dev/null @@ -1,73 +0,0 @@ -"""A class that does not store any data. This is the default memory provider.""" -from __future__ import annotations - -from typing import Any - -from autogpt.memory.base import MemoryProviderSingleton - - -class NoMemory(MemoryProviderSingleton): - """ - A class that does not store any data. This is the default memory provider. - """ - - def __init__(self, cfg): - """ - Initializes the NoMemory provider. - - Args: - cfg: The config object. - - Returns: None - """ - pass - - def add(self, data: str) -> str: - """ - Adds a data point to the memory. No action is taken in NoMemory. - - Args: - data: The data to add. - - Returns: An empty string. - """ - return "" - - def get(self, data: str) -> list[Any] | None: - """ - Gets the data from the memory that is most relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - - Returns: None - """ - return None - - def clear(self) -> str: - """ - Clears the memory. No action is taken in NoMemory. - - Returns: An empty string. - """ - return "" - - def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None: - """ - Returns all the data in the memory that is relevant to the given data. - NoMemory always returns None. - - Args: - data: The data to compare to. - num_relevant: The number of relevant data to return. - - Returns: None - """ - return None - - def get_stats(self): - """ - Returns: An empty dictionary as there are no stats in NoMemory. - """ - return {} diff --git a/spaces/Dao3/chatwithdocs/embeddings.py b/spaces/Dao3/chatwithdocs/embeddings.py deleted file mode 100644 index d7596d473dd2539e182058296e1f8844c0a37a22..0000000000000000000000000000000000000000 --- a/spaces/Dao3/chatwithdocs/embeddings.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Wrapper around OpenAI embedding models.""" -from typing import Any, Dict, List, Optional - -from pydantic import BaseModel, Extra, root_validator - -from langchain.embeddings.base import Embeddings -from langchain.utils import get_from_dict_or_env - -from tenacity import ( - retry, - retry_if_exception_type, - stop_after_attempt, - wait_exponential, -) -from openai.error import Timeout, APIError, APIConnectionError, RateLimitError - - -class OpenAIEmbeddings(BaseModel, Embeddings): - """Wrapper around OpenAI embedding models. - To use, you should have the ``openai`` python package installed, and the - environment variable ``OPENAI_API_KEY`` set with your API key or pass it - as a named parameter to the constructor. - Example: - .. code-block:: python - from langchain.embeddings import OpenAIEmbeddings - openai = OpenAIEmbeddings(openai_api_key="my-api-key") - """ - - client: Any #: :meta private: - document_model_name: str = "text-embedding-ada-002" - query_model_name: str = "text-embedding-ada-002" - openai_api_key: Optional[str] = None - - class Config: - """Configuration for this pydantic object.""" - - extra = Extra.forbid - - # TODO: deprecate this - @root_validator(pre=True, allow_reuse=True) - def get_model_names(cls, values: Dict) -> Dict: - """Get model names from just old model name.""" - if "model_name" in values: - if "document_model_name" in values: - raise ValueError( - "Both `model_name` and `document_model_name` were provided, " - "but only one should be." - ) - if "query_model_name" in values: - raise ValueError( - "Both `model_name` and `query_model_name` were provided, " - "but only one should be." - ) - model_name = values.pop("model_name") - values["document_model_name"] = f"text-search-{model_name}-doc-001" - values["query_model_name"] = f"text-search-{model_name}-query-001" - return values - - @root_validator(allow_reuse=True) - def validate_environment(cls, values: Dict) -> Dict: - """Validate that api key and python package exists in environment.""" - openai_api_key = get_from_dict_or_env( - values, "openai_api_key", "OPENAI_API_KEY" - ) - try: - import openai - - openai.api_key = openai_api_key - values["client"] = openai.Embedding - except ImportError: - raise ValueError( - "Could not import openai python package. " - "Please it install it with `pip install openai`." - ) - return values - - @retry( - reraise=True, - stop=stop_after_attempt(100), - wait=wait_exponential(multiplier=1, min=10, max=60), - retry=( - retry_if_exception_type(Timeout) - | retry_if_exception_type(APIError) - | retry_if_exception_type(APIConnectionError) - | retry_if_exception_type(RateLimitError) - ), - ) - def _embedding_func(self, text: str, *, engine: str) -> List[float]: - """Call out to OpenAI's embedding endpoint with exponential backoff.""" - # replace newlines, which can negatively affect performance. - text = text.replace("\n", " ") - return self.client.create(input=[text], engine=engine)["data"][0]["embedding"] - - def embed_documents(self, texts: List[str]) -> List[List[float]]: - """Call out to OpenAI's embedding endpoint for embedding search docs. - Args: - texts: The list of texts to embed. - Returns: - List of embeddings, one for each text. - """ - responses = [ - self._embedding_func(text, engine=self.document_model_name) - for text in texts - ] - return responses - - def embed_query(self, text: str) -> List[float]: - """Call out to OpenAI's embedding endpoint for embedding query text. - Args: - text: The text to embed. - Returns: - Embeddings for the text. - """ - embedding = self._embedding_func(text, engine=self.query_model_name) - return embedding \ No newline at end of file diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/intro.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/intro.py deleted file mode 100644 index b5cb350304ce0dc171c9346c50a9eeda25426a0f..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/intro.py +++ /dev/null @@ -1,55 +0,0 @@ -import streamlit as st - - - -title = "Système de traduction adapté aux lunettes connectées" -sidebar_name = "Introduction" - - -def run(): - - # TODO: choose between one of these GIFs - # st.image("https://dst-studio-template.s3.eu-west-3.amazonaws.com/1.gif") - # st.image("https://dst-studio-template.s3.eu-west-3.amazonaws.com/2.gif") - # st.image("https://dst-studio-template.s3.eu-west-3.amazonaws.com/3.gif") - # st.image("assets/tough-communication.gif",use_column_width=True) - st.image("https://media.tenor.com/pfOeAfytY98AAAAC/miss-honey-glasses-off.gif",use_column_width=True) - st.title(title) - - st.markdown("--------------------------------------------------------") - - st.header("**Contexte**") - - st.markdown( - """ - Les personnes malentendantes souffrent d’un problème auditif et se trouvent donc dans l’incapacité de communiquer aisément avec autrui. - Par ailleurs, toute personne se trouvant dans un pays étranger dont il ne connaît pas la langue se trouve dans la situation d’une personne malentendante. - Les lunettes connectées sont dotées de la technologie de reconnaissance vocale avec des algorithmes de deep learning en intelligence artificielle. - Elles permettent de localiser la voix d’un interlocuteur puis d’afficher sur les verres la transcription textuelle en temps réel. A partir de cette transcription, il est possible d’:red[**afficher la traduction dans la langue du porteur de ces lunettes**]. - - """ - ) - st.header("**Objectifs**") - - st.markdown( - """ - L’objectif de ce projet est d’adapter un système de traduction au projet de lunettes connectées. Le système implémenté par ces lunettes permet de localiser, de transcrire la voix d’un interlocuteur et d’afficher la transcription sur des lunettes connectées. - Dans ce projet, notre groupe implémentera un :red[**système de traduction**] qui élargira l’utilisation de ces lunettes à un public plus vaste et permettra à deux individus ne pratiquant pas la même langue de pouvoir communiquer aisément. - Ce projet concentrera ses efforts sur l'implémentation d’un système de traduction plutôt que sur la reconnaissance vocale. Celle-ci nous sera fournie. - - Il nous faut prendre en considération quelques contraintes d’usages final, et voir si nous pourrons les respecter : - - - Traduction en temps réel d’un dialogue oral -> optimisation sur la rapidité - - Dialogue courant sans expertise particulière (champs sémantique généraliste) - - Prise en compte de la vitesse de lecture de chacun, la traduction doit être synthétique et conserver l’idée clé sans biais. (tout public et/ou design inclusif) - - Il est souhaitable que le système puisse rapidement :red[**identifier si les phrases fournies sont exprimées dans une des langues connues**] par le système de traduction, et si c’est le cas, :red[**laquelle**]. - De plus, si le système de reconnaissance vocale n’est pas fiable, il est souhaitable de corriger la phrase en fonction des mots environnants ou des phrases préalablement entendues. - Lors de la traduction, nous prendrons en compte le contexte défini par la phrase précédente ainsi que par le contexte des phrases préalablement traduites. - Nous évaluerons la qualité de nos résultats en les comparant avec des systèmes performants tels que “[Google translate](https://translate.google.fr/)” et “[Deepl](https://www.deepl.com/translator)”. - Enfin, si le temps, nos compétences et les datasets existants, le permettent, nous intégreront une langue originale, non proposée par ces systèmes, telle qu’une langue régionale ou de l’argot. - - Le projet est enregistré sur [Github](https://github.com/DataScientest-Studio/AVR23_CDS_Reco_vocale/tree/main) - - """ - ) \ No newline at end of file diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/upsegmodel/__init__.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/upsegmodel/__init__.py deleted file mode 100644 index 76b40a0a36bc2976f185dbdc344c5a7c09b65920..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/upsegmodel/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .models import ModelBuilder, SegmentationModule diff --git a/spaces/Dipl0/Dipl0-pepe-diffuser/app.py b/spaces/Dipl0/Dipl0-pepe-diffuser/app.py deleted file mode 100644 index 4c2da02a033d91ee480f2844f58ce46439f97c3b..0000000000000000000000000000000000000000 --- a/spaces/Dipl0/Dipl0-pepe-diffuser/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Dipl0/pepe-diffuser").launch() \ No newline at end of file diff --git a/spaces/Djacon/emotion_detection/static/index.html b/spaces/Djacon/emotion_detection/static/index.html deleted file mode 100644 index 6e00a4e98607a67667f978489218067a5532abe2..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/static/index.html +++ /dev/null @@ -1,348 +0,0 @@ - - - - - - - - Text2Feature | Homepage - - - - - - - - - - -
      -
      - - - -
      -
      -
      -
      - - -
      -
      - - - - - - - - - -
      - - -
      -
      - - -
      -
      - -
      -
      -
      -

      Homepage

      - - -
      - -
      -
      -
      -
      -

      Today’s Users

      - -
      -

      23

      - - -15% - -
      -
      - -
      - - - -
      -
      - -
      -
      -

      Total Users

      - -
      -

      54

      - - +5% - -
      -
      - -
      - - - -
      -
      - -
      -
      -

      New Users

      - -
      -

      +15

      - - +34% - -
      -
      - -
      - - - -
      -
      - -
      -
      -

      Total Projects

      - -
      -

      2

      -
      -
      - -
      - - - -
      -
      -
      - -
      -
      -

      - Welcome to Text2Feature! -

      -

      - – your gateway to the world of text processing and analysis! Our tools empower you to - easily and swiftly process textual information from any source, be it files, web pages, or text data. We provide - you with powerful instruments to search for and extract key 'features' within text, aiding you in extracting - valuable insights and making informed decisions. -

      -

      - With Text2Feature, you can: -

      -
        -
      • Import and analyze text files in various formats.
      • -
      • Search for and highlight important features within text for further exploration.
      • -
      • Structure and organize your textual content for more effective analysis.
      • -
      • Utilize a range of tools and methods for text processing and knowledge extraction.
      • -
      -

      - Join Text2Feature and transform text into valuable knowledge effortlessly. Get started now and bring your - research and analytical ideas to life! -

      -
      -
      -
      -
      -
      -
      -
      - - - - - \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/train.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/train.py deleted file mode 100644 index afc4c934c6944b4333efa38a025f14888c67c59d..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg3/train.py +++ /dev/null @@ -1,325 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Train a GAN using the techniques described in the paper -"Alias-Free Generative Adversarial Networks".""" - -import os -import click -import re -import json -import tempfile -import torch - -import dnnlib -from training import training_loop -from metrics import metric_main -from torch_utils import training_stats -from torch_utils import custom_ops -import ast -# ---------------------------------------------------------------------------- - - -def subprocess_fn(rank, c, temp_dir): - dnnlib.util.Logger(file_name=os.path.join( - c.run_dir, 'log.txt'), file_mode='a', should_flush=True) - - # Init torch.distributed. - if c.num_gpus > 1: - init_file = os.path.abspath(os.path.join( - temp_dir, '.torch_distributed_init')) - if os.name == 'nt': - init_method = 'file:///' + init_file.replace('\\', '/') - torch.distributed.init_process_group( - backend='gloo', init_method=init_method, rank=rank, world_size=c.num_gpus) - else: - init_method = f'file://{init_file}' - torch.distributed.init_process_group( - backend='nccl', init_method=init_method, rank=rank, world_size=c.num_gpus) - - # Init torch_utils. - sync_device = torch.device('cuda', rank) if c.num_gpus > 1 else None - training_stats.init_multiprocessing(rank=rank, sync_device=sync_device) - if rank != 0: - custom_ops.verbosity = 'none' - - # Execute training loop. - training_loop.training_loop(rank=rank, **c) - -# ---------------------------------------------------------------------------- - - -def launch_training(c, desc, outdir, dry_run): - dnnlib.util.Logger(should_flush=True) - - # Pick output directory. - prev_run_dirs = [] - if os.path.isdir(outdir): - prev_run_dirs = [x for x in os.listdir( - outdir) if os.path.isdir(os.path.join(outdir, x))] - prev_run_ids = [re.match(r'^\d+', x) for x in prev_run_dirs] - prev_run_ids = [int(x.group()) for x in prev_run_ids if x is not None] - cur_run_id = max(prev_run_ids, default=-1) + 1 - c.run_dir = os.path.join(outdir, f'{cur_run_id:05d}-{desc}') - assert not os.path.exists(c.run_dir) - - # Print options. - print() - print('Training options:') - print(json.dumps(c, indent=2)) - print() - print(f'Output directory: {c.run_dir}') - print(f'Number of GPUs: {c.num_gpus}') - print(f'Batch size: {c.batch_size} images') - print(f'Training duration: {c.total_kimg} kimg') - print(f'Dataset path: {c.training_set_kwargs.path}') - print(f'Dataset size: {c.training_set_kwargs.max_size} images') - print(f'Dataset resolution: {c.training_set_kwargs.resolution}') - print(f'Dataset labels: {c.training_set_kwargs.use_labels}') - print(f'Dataset x-flips: {c.training_set_kwargs.xflip}') - print() - - # Dry run? - if dry_run: - print('Dry run; exiting.') - return - - # Create output directory. - print('Creating output directory...') - os.makedirs(c.run_dir) - with open(os.path.join(c.run_dir, 'training_options.json'), 'wt') as f: - json.dump(c, f, indent=2) - - # Launch processes. - print('Launching processes...') - torch.multiprocessing.set_start_method('spawn') - with tempfile.TemporaryDirectory() as temp_dir: - if c.num_gpus == 1: - subprocess_fn(rank=0, c=c, temp_dir=temp_dir) - else: - torch.multiprocessing.spawn( - fn=subprocess_fn, args=(c, temp_dir), nprocs=c.num_gpus) - -# ---------------------------------------------------------------------------- - - -def init_dataset_kwargs(data, square=False): - # dataset - - try: - dataset_kwargs = dnnlib.EasyDict(class_name='training.dataset.ImageFolderDataset', - path=data, use_labels=True, max_size=None, xflip=False, square=square) - # Subclass of training.dataset.Dataset. - dataset_obj = dnnlib.util.construct_class_by_name(**dataset_kwargs) - # Be explicit about resolution. - dataset_kwargs.resolution = dataset_obj.resolution - # Be explicit about labels. - dataset_kwargs.use_labels = dataset_obj.has_labels - # Be explicit about dataset size. - dataset_kwargs.max_size = len(dataset_obj) - return dataset_kwargs, dataset_obj.name - except IOError as err: - raise click.ClickException(f'--data: {err}') - - print("out of dataset") -# ---------------------------------------------------------------------------- - - -def parse_comma_separated_list(s): - if isinstance(s, list): - return s - if s is None or s.lower() == 'none' or s == '': - return [] - return s.split(',') - -# ---------------------------------------------------------------------------- - - -@click.command() -# Required. -@click.option('--outdir', help='Where to save the results', metavar='DIR', required=True) -@click.option('--cfg', help='Base configuration', type=click.Choice(['stylegan3-t', 'stylegan3-r', 'stylegan2']), required=True) -@click.option('--data', help='Training data', metavar='PATH', required=True) -@click.option('--gpus', help='Number of GPUs to use', metavar='INT', type=click.IntRange(min=1), required=True) -@click.option('--batch', help='Total batch size', metavar='INT', type=click.IntRange(min=1), required=True) -@click.option('--gamma', help='R1 regularization weight', metavar='FLOAT', type=click.FloatRange(min=0), required=True) -@click.option('--square', help='True for square, False for rectangle', type=bool, metavar='BOOL', default=False) -# Optional features. -@click.option('--cond', help='Train conditional model', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--mirror', help='Enable dataset x-flips', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--aug', help='Augmentation mode', type=click.Choice(['noaug', 'ada', 'fixed']), default='ada', show_default=True) -@click.option('--resume', help='Resume from given network pickle', metavar='[PATH|URL]', type=str) -@click.option('--freezed', help='Freeze first layers of D', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -# Misc hyperparameters. -@click.option('--p', help='Probability for --aug=fixed', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.2, show_default=True) -@click.option('--target', help='Target value for --aug=ada', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.6, show_default=True) -@click.option('--batch-gpu', help='Limit batch size per GPU', metavar='INT', type=click.IntRange(min=1)) -@click.option('--cbase', help='Capacity multiplier', metavar='INT', type=click.IntRange(min=1), default=32768, show_default=True) -@click.option('--cmax', help='Max. feature maps', metavar='INT', type=click.IntRange(min=1), default=512, show_default=True) -@click.option('--glr', help='G learning rate [default: varies]', metavar='FLOAT', type=click.FloatRange(min=0)) -@click.option('--dlr', help='D learning rate', metavar='FLOAT', type=click.FloatRange(min=0), default=0.002, show_default=True) -@click.option('--map-depth', help='Mapping network depth [default: varies]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--mbstd-group', help='Minibatch std group size', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True) -# Misc settings. -@click.option('--desc', help='String to include in result dir name', metavar='STR', type=str) -@click.option('--metrics', help='Quality metrics', metavar='[NAME|A,B,C|none]', type=parse_comma_separated_list, default='fid50k_full', show_default=True) -@click.option('--kimg', help='Total training duration', metavar='KIMG', type=click.IntRange(min=1), default=25000, show_default=True) -@click.option('--tick', help='How often to print progress', metavar='KIMG', type=click.IntRange(min=1), default=4, show_default=True) -@click.option('--snap', help='How often to save snapshots', metavar='TICKS', type=click.IntRange(min=1), default=50, show_default=True) -@click.option('--seed', help='Random seed', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -@click.option('--fp32', help='Disable mixed-precision', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--nobench', help='Disable cuDNN benchmarking', metavar='BOOL', type=bool, default=False, show_default=True) -@click.option('--workers', help='DataLoader worker processes', metavar='INT', type=click.IntRange(min=1), default=3, show_default=True) -@click.option('-n', '--dry-run', help='Print training options and exit', is_flag=True) -def main(**kwargs): - """Train a GAN using the techniques described in the paper - "Alias-Free Generative Adversarial Networks". - - Examples: - - \b - # Train StyleGAN3-T for AFHQv2 using 8 GPUs. - python train.py --outdir=~/training-runs --cfg=stylegan3-t --data=~/datasets/afhqv2-512x512.zip \\ - --gpus=8 --batch=32 --gamma=8.2 --mirror=1 - - \b - # Fine-tune StyleGAN3-R for MetFaces-U using 1 GPU, starting from the pre-trained FFHQ-U pickle. - python train.py --outdir=~/training-runs --cfg=stylegan3-r --data=~/datasets/metfacesu-1024x1024.zip \\ - --gpus=8 --batch=32 --gamma=6.6 --mirror=1 --kimg=5000 --snap=5 \\ - --resume=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhqu-1024x1024.pkl - - \b - # Train StyleGAN2 for FFHQ at 1024x1024 resolution using 8 GPUs. - python train.py --outdir=~/training-runs --cfg=stylegan2 --data=~/datasets/ffhq-1024x1024.zip \\ - --gpus=8 --batch=32 --gamma=10 --mirror=1 --aug=noaug - """ - - # Initialize config. - opts = dnnlib.EasyDict(kwargs) # Command line arguments. - c = dnnlib.EasyDict() # Main config dict. - print('---- square: ', opts.square) - c.G_kwargs = dnnlib.EasyDict( - class_name=None, z_dim=512, w_dim=512, mapping_kwargs=dnnlib.EasyDict(), square=opts.square) - c.D_kwargs = dnnlib.EasyDict(class_name='training.networks_stylegan2.Discriminator', block_kwargs=dnnlib.EasyDict( - ), mapping_kwargs=dnnlib.EasyDict(), epilogue_kwargs=dnnlib.EasyDict(), square=opts.square) - c.G_opt_kwargs = dnnlib.EasyDict( - class_name='torch.optim.Adam', betas=[0, 0.99], eps=1e-8) - c.D_opt_kwargs = dnnlib.EasyDict( - class_name='torch.optim.Adam', betas=[0, 0.99], eps=1e-8) - c.loss_kwargs = dnnlib.EasyDict(class_name='training.loss.StyleGAN2Loss') - c.data_loader_kwargs = dnnlib.EasyDict(pin_memory=True, prefetch_factor=2) - - # Training set. - c.training_set_kwargs, dataset_name = init_dataset_kwargs( - data=opts.data, square=opts.square) - if opts.cond and not c.training_set_kwargs.use_labels: - raise click.ClickException( - '--cond=True requires labels specified in dataset.json') - c.training_set_kwargs.use_labels = opts.cond - c.training_set_kwargs.xflip = opts.mirror - - # Hyperparameters & settings. - c.num_gpus = opts.gpus - c.batch_size = opts.batch - c.batch_gpu = opts.batch_gpu or opts.batch // opts.gpus - c.G_kwargs.channel_base = c.D_kwargs.channel_base = opts.cbase - c.G_kwargs.channel_max = c.D_kwargs.channel_max = opts.cmax - c.G_kwargs.mapping_kwargs.num_layers = ( - 8 if opts.cfg == 'stylegan2' else 2) if opts.map_depth is None else opts.map_depth - c.D_kwargs.block_kwargs.freeze_layers = opts.freezed - c.D_kwargs.epilogue_kwargs.mbstd_group_size = opts.mbstd_group - c.loss_kwargs.r1_gamma = opts.gamma - c.G_opt_kwargs.lr = ( - 0.002 if opts.cfg == 'stylegan2' else 0.0025) if opts.glr is None else opts.glr - c.D_opt_kwargs.lr = opts.dlr - c.metrics = opts.metrics - c.total_kimg = opts.kimg - c.kimg_per_tick = opts.tick - c.image_snapshot_ticks = c.network_snapshot_ticks = opts.snap - c.random_seed = c.training_set_kwargs.random_seed = opts.seed - c.data_loader_kwargs.num_workers = opts.workers - - # Sanity checks. - if c.batch_size % c.num_gpus != 0: - raise click.ClickException('--batch must be a multiple of --gpus') - if c.batch_size % (c.num_gpus * c.batch_gpu) != 0: - raise click.ClickException( - '--batch must be a multiple of --gpus times --batch-gpu') - if c.batch_gpu < c.D_kwargs.epilogue_kwargs.mbstd_group_size: - raise click.ClickException( - '--batch-gpu cannot be smaller than --mbstd') - if any(not metric_main.is_valid_metric(metric) for metric in c.metrics): - raise click.ClickException('\n'.join( - ['--metrics can only contain the following values:'] + metric_main.list_valid_metrics())) - - # Base configuration. - c.ema_kimg = c.batch_size * 10 / 32 - if opts.cfg == 'stylegan2': - c.G_kwargs.class_name = 'training.networks_stylegan2.Generator' - # Enable style mixing regularization. - c.loss_kwargs.style_mixing_prob = 0.9 - c.loss_kwargs.pl_weight = 2 # Enable path length regularization. - c.G_reg_interval = 4 # Enable lazy regularization for G. - # Speed up training by using regular convolutions instead of grouped convolutions. - c.G_kwargs.fused_modconv_default = 'inference_only' - # Speed up path length regularization by skipping gradient computation wrt. conv2d weights. - c.loss_kwargs.pl_no_weight_grad = True - else: - c.G_kwargs.class_name = 'training.networks_stylegan3.Generator' - c.G_kwargs.magnitude_ema_beta = 0.5 ** (c.batch_size / (20 * 1e3)) - if opts.cfg == 'stylegan3-r': - c.G_kwargs.conv_kernel = 1 # Use 1x1 convolutions. - c.G_kwargs.channel_base *= 2 # Double the number of feature maps. - c.G_kwargs.channel_max *= 2 - # Use radially symmetric downsampling filters. - c.G_kwargs.use_radial_filters = True - # Blur the images seen by the discriminator. - c.loss_kwargs.blur_init_sigma = 10 - # Fade out the blur during the first N kimg. - c.loss_kwargs.blur_fade_kimg = c.batch_size * 200 / 32 - - # Augmentation. - if opts.aug != 'noaug': - c.augment_kwargs = dnnlib.EasyDict(class_name='training.augment.AugmentPipe', xflip=1, rotate90=1, xint=1, - scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1) - if opts.aug == 'ada': - c.ada_target = opts.target - if opts.aug == 'fixed': - c.augment_p = opts.p - - # Resume. - if opts.resume is not None: - c.resume_pkl = opts.resume - c.ada_kimg = 100 # Make ADA react faster at the beginning. - c.ema_rampup = None # Disable EMA rampup. - c.loss_kwargs.blur_init_sigma = 0 # Disable blur rampup. - - # Performance-related toggles. - if opts.fp32: - c.G_kwargs.num_fp16_res = c.D_kwargs.num_fp16_res = 0 - c.G_kwargs.conv_clamp = c.D_kwargs.conv_clamp = None - if opts.nobench: - c.cudnn_benchmark = False - - # Description string. - desc = f'{opts.cfg:s}-{dataset_name:s}-gpus{c.num_gpus:d}-batch{c.batch_size:d}-gamma{c.loss_kwargs.r1_gamma:g}' - if opts.desc is not None: - desc += f'-{opts.desc}' - - # Launch. - launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run) - -# ---------------------------------------------------------------------------- - - -if __name__ == "__main__": - main() # pylint: disable=no-value-for-parameter - -# ---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ablation.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ablation.py deleted file mode 100644 index 6afb771555419b1166adfdce8489303ae912c9fc..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ablation.py +++ /dev/null @@ -1,138 +0,0 @@ -# encoding: utf-8 -import os -import random -import torch -import torch.nn as nn -import torch.distributed as dist - -from yolox.exp import Exp as MyExp -from yolox.data import get_yolox_datadir - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.num_classes = 1 - self.depth = 1.33 - self.width = 1.25 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - self.train_ann = "train.json" - self.val_ann = "val_half.json" - self.input_size = (800, 1440) - self.test_size = (800, 1440) - self.random_size = (18, 32) - self.max_epoch = 80 - self.print_interval = 20 - self.eval_interval = 5 - self.test_conf = 0.1 - self.nmsthre = 0.7 - self.no_aug_epochs = 10 - self.basic_lr_per_img = 0.001 / 64.0 - self.warmup_epochs = 1 - - def get_data_loader(self, batch_size, is_distributed, no_aug=False): - from yolox.data import ( - MOTDataset, - TrainTransform, - YoloBatchSampler, - DataLoader, - InfiniteSampler, - MosaicDetection, - ) - - dataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mix_mot_ch"), - json_file=self.train_ann, - name='', - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=500, - ), - ) - - dataset = MosaicDetection( - dataset, - mosaic=not no_aug, - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=1000, - ), - degrees=self.degrees, - translate=self.translate, - scale=self.scale, - shear=self.shear, - perspective=self.perspective, - enable_mixup=self.enable_mixup, - ) - - self.dataset = dataset - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - - sampler = InfiniteSampler( - len(self.dataset), seed=self.seed if self.seed else 0 - ) - - batch_sampler = YoloBatchSampler( - sampler=sampler, - batch_size=batch_size, - drop_last=False, - input_dimension=self.input_size, - mosaic=not no_aug, - ) - - dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True} - dataloader_kwargs["batch_sampler"] = batch_sampler - train_loader = DataLoader(self.dataset, **dataloader_kwargs) - - return train_loader - - def get_eval_loader(self, batch_size, is_distributed, testdev=False): - from yolox.data import MOTDataset, ValTransform - - valdataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mot"), - json_file=self.val_ann, - img_size=self.test_size, - name='train', - preproc=ValTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - ), - ) - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - sampler = torch.utils.data.distributed.DistributedSampler( - valdataset, shuffle=False - ) - else: - sampler = torch.utils.data.SequentialSampler(valdataset) - - dataloader_kwargs = { - "num_workers": self.data_num_workers, - "pin_memory": True, - "sampler": sampler, - } - dataloader_kwargs["batch_size"] = batch_size - val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs) - - return val_loader - - def get_evaluator(self, batch_size, is_distributed, testdev=False): - from yolox.evaluators import COCOEvaluator - - val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev) - evaluator = COCOEvaluator( - dataloader=val_loader, - img_size=self.test_size, - confthre=self.test_conf, - nmsthre=self.nmsthre, - num_classes=self.num_classes, - testdev=testdev, - ) - return evaluator diff --git a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/EsoCode/text-generation-webui/extensions/superbooga/download_urls.py b/spaces/EsoCode/text-generation-webui/extensions/superbooga/download_urls.py deleted file mode 100644 index efe300d28393e4550f241808073f04c98fb33ace..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/extensions/superbooga/download_urls.py +++ /dev/null @@ -1,32 +0,0 @@ -import concurrent.futures - -import requests - - -def download_single(url): - response = requests.get(url, timeout=5) - if response.status_code == 200: - return response.content - else: - raise Exception("Failed to download URL") - - -def download_urls(urls, threads=1): - with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as executor: - futures = [] - for url in urls: - future = executor.submit(download_single, url) - futures.append(future) - - results = [] - i = 0 - for future in concurrent.futures.as_completed(futures): - try: - result = future.result() - results.append(result) - i += 1 - yield f"{i}/{len(urls)}", results - except Exception: - pass - - yield "Done", results diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/seg_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/seg_pipeline.py deleted file mode 100644 index 378474dfb5341ec93e73bb61047c43ba72d5e127..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/recog_pipelines/seg_pipeline.py +++ /dev/null @@ -1,66 +0,0 @@ -img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - -gt_label_convertor = dict( - type='SegConvertor', dict_type='DICT36', with_unknown=True, lower=True) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='RandomPaddingOCR', - max_ratio=[0.15, 0.2, 0.15, 0.2], - box_type='char_quads'), - dict(type='OpencvToPil'), - dict( - type='RandomRotateImageBox', - min_angle=-17, - max_angle=17, - box_type='char_quads'), - dict(type='PilToOpencv'), - dict( - type='ResizeOCR', - height=64, - min_width=64, - max_width=512, - keep_aspect_ratio=True), - dict( - type='OCRSegTargets', - label_convertor=gt_label_convertor, - box_type='char_quads'), - dict(type='RandomRotateTextDet', rotate_ratio=0.5, max_angle=15), - dict(type='ColorJitter', brightness=0.4, contrast=0.4, saturation=0.4), - dict(type='ToTensorOCR'), - dict(type='FancyPCA'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='CustomFormatBundle', - keys=['gt_kernels'], - visualize=dict(flag=False, boundary_key=None), - call_super=False), - dict( - type='Collect', - keys=['img', 'gt_kernels'], - meta_keys=['filename', 'ori_shape', 'resize_shape']) -] - -test_img_norm_cfg = dict( - mean=[x * 255 for x in img_norm_cfg['mean']], - std=[x * 255 for x in img_norm_cfg['std']]) - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=64, - min_width=64, - max_width=None, - keep_aspect_ratio=True), - dict(type='Normalize', **test_img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'resize_shape', 'img_norm_cfg', 'ori_filename', - 'img_shape', 'ori_shape' - ]) -] diff --git a/spaces/Falah/female/app.py b/spaces/Falah/female/app.py deleted file mode 100644 index 53c83b218c0f6e27ff2ec10315b591568660598a..0000000000000000000000000000000000000000 --- a/spaces/Falah/female/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import random -import yaml -import gradio as gr - -def generate_text(): - input_data = get_options('female.yaml') - command = input_data['command'] - for key in input_data.keys(): - if key != 'command': - command = command.replace(f'[{key}]', input_data[key][random.randint(0, len(input_data[key]) - 1)]) - return command - -def get_options(file_path): - with open(file_path, 'r') as file: - options = yaml.load(file, Loader=yaml.FullLoader) - return options - -iface = gr.Interface( - fn=generate_text, - inputs=None, - outputs=gr.outputs.Textbox(label="Generated Text"), - title="Beautiful Female AI Generator Prompts", - description="Generates a random text prompt for a female by AI ", - allow_flagging=False, - theme="compact", - examples=[ - ["Generate"], - ["Generate"], - ["Generate"] - ], - # Add image to the interface - #image="https://iraqprogrammer.files.wordpress.com/2023/03/00045-3227812873.png?w=1400&h=" -) - -iface.launch() diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/modules.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/hteyun.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/hteyun.py deleted file mode 100644 index a6eba7c00331d720afb47215e818f5900d4aedcf..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Providers/hteyun.py +++ /dev/null @@ -1,34 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': 'application/json, text/plain, */*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'Origin': 'https://hteyun.com', - 'Referer': 'https://hteyun.com/chat/', - } - data = { - 'messages': messages, - 'model': model, - 'systemMessage': 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. Respond using russian language.', - 'temperature': 0.7, - 'presence_penalty': 0, - } - response = requests.post(url + '/api/chat-stream', json=data, headers=headers, stream=True) - print(response.json()) - - # Извлечение текста из response - return response.json()['text'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/Fisharp/starcoder-playground/src/utils.py b/spaces/Fisharp/starcoder-playground/src/utils.py deleted file mode 100644 index 767c5da57dc769cd1830f2ffdbccd7548e5f9c87..0000000000000000000000000000000000000000 --- a/spaces/Fisharp/starcoder-playground/src/utils.py +++ /dev/null @@ -1,97 +0,0 @@ -import os -from typing import List -from urllib.parse import urljoin - -from settings import ( - DEFAULT_HUGGINGFACE_MODELS_API_BASE_URL, - STATIC_PATH, -) - -def masked(value: str, n_shown: int, length: int = None) -> str: - """Returns a string with the first and last n_shown characters - and the middle of the string replaced with '*' - - Args: - value (str): The string to mask - n_shown (int): The number of characters to show at the beginning and end of the string - length (int, optional): The length of the string. If not given, it will be calculated as the length of the value. Defaults to None. - - Returns: - str: The masked string - """ - l = length or len(value) - return value[0:n_shown] + '*'*(length-2*n_shown) + value[-n_shown:] - - -def ofuscated(value: str) -> str: - """Returns a string with the first and last 4 characters - and the middle of the string replaced with '*' - - Args: - value (str): The string to mask - - Returns: - str: The masked string - """ - return masked(value, 4, len(value)//2) - - -def preview(label:str, value: str, ofuscate=False): - """Print the variable name and its value in a nice way. - If ofuscate is True, it will ofuscate the value - - Args: - variable_name (str): The name of the variable to print - ofuscate (bool, optional): If True, it will ofuscate the value. Defaults to False. - """ - str_value = ofuscated(str(value)) if ofuscate else str(value) - print(f"{label} = {str_value}") - -def get_url_from_env_or_default_path(env_name: str, api_path: str) -> str: - """Takes an url from the env variable (given the env name) - or combines with urljoin the default models base url - with the default path (given the path name) - - Args: - env_name (str): The name of the environment variable to check - api_path (str): The default path to use if the environment variable is not set - - Returns: - str: The url to use - """ - return os.environ.get(env_name) or urljoin( - DEFAULT_HUGGINGFACE_MODELS_API_BASE_URL, api_path - ) - -def get_file_as_string(file_name, path=STATIC_PATH) -> str: - """Loads the content of a file given its name - and returns all of its lines as a single string - if a file path is given, it will be used - instead of the default static path (from settings) - - Args: - file_name (_type_): The name of the file to load. - path (str, optional): The path to the file. Defaults to the current directory. - - Returns: - str: The content of the file as a single string - """ - with open(os.path.join(path, file_name), mode='r', encoding='UTF-8') as f: - return f.read() - - -def get_sections(string: str, delimiter: str, up_to: int = None) -> List[str]: - """Splits a string into sections given a delimiter - - Args: - string (str): The string to split - delimiter (str): The delimiter to use - up_to (int, optional): The maximum number of sections to return. - Defaults to None (which means all sections) - - Returns: - List[str]: The list of sections (up to the given limit, if any provided) - """ - return [section.strip() - for section in string.split(delimiter) - if (section and not section.isspace())][:up_to] diff --git a/spaces/Flyingpotato42/gpt4all-tweaked/app.py b/spaces/Flyingpotato42/gpt4all-tweaked/app.py deleted file mode 100644 index 158da3c08dddf7469a9549786c2a39a822a08f6b..0000000000000000000000000000000000000000 --- a/spaces/Flyingpotato42/gpt4all-tweaked/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -import os -import base64 -encrypted = "ZWRjNjE1NjZhNTVkZmVkOGI5MzA0Y2JiMmJlNzE0NWQ2YTllMjQ2MTg2YTc0MzMyZTE4NzRkNzEzMmVjMTEzZg==" -base64_bytes = encrypted.encode('ascii') -message_bytes = base64.b64decode(base64_bytes) -os.environ["SERPAPI_API_KEY"] = message_bytes.decode('ascii') -from langchain.agents import load_tools -from langchain.agents import initialize_agent -from langchain.llms import GPT4All -model_path = "./ggml-gpt4all-j-v1.3-groovy.bin" -llm = GPT4All(model=model_path) -agent = initialize_agent(llm, agent="zero-shot-react-description", verbose=False) -def greet(prompt): - return agent.run(prompt) -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/utils.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/utils.py deleted file mode 100644 index a91f9eb2df9f2b097431432753212eb440f93020..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/utils.py +++ /dev/null @@ -1,399 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch -import regex as re - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - - -zh_pattern = re.compile(r'[\u4e00-\u9fa5]') -en_pattern = re.compile(r'[a-zA-Z]') -jp_pattern = re.compile(r'[\u3040-\u30ff\u31f0-\u31ff]') -kr_pattern = re.compile(r'[\uac00-\ud7af\u1100-\u11ff\u3130-\u318f\ua960-\ua97f]') -num_pattern=re.compile(r'[0-9]') -comma=r"(?<=[.。!!??;;,,、::'\"‘“”’()()《》「」~——])" #向前匹配但固定长度 -tags={'ZH':'[ZH]','EN':'[EN]','JP':'[JA]','KR':'[KR]'} - -def tag_cjke(text): - '''为中英日韩加tag,中日正则分不开,故先分句分离中日再识别,以应对大部分情况''' - sentences = re.split(r"([.。!!??;;,,、::'\"‘“”’()()【】《》「」~——]+ *(?![0-9]))", text) #分句,排除小数点 - sentences.append("") - sentences = ["".join(i) for i in zip(sentences[0::2],sentences[1::2])] - # print(sentences) - prev_lang=None - tagged_text = "" - for s in sentences: - #全为符号跳过 - nu = re.sub(r'[\s\p{P}]+', '', s, flags=re.U).strip() - if len(nu)==0: - continue - s = re.sub(r'[()()《》「」【】‘“”’]+', '', s) - jp=re.findall(jp_pattern, s) - #本句含日语字符判断为日语 - if len(jp)>0: - prev_lang,tagged_jke=tag_jke(s,prev_lang) - tagged_text +=tagged_jke - else: - prev_lang,tagged_cke=tag_cke(s,prev_lang) - tagged_text +=tagged_cke - return tagged_text - -def tag_jke(text,prev_sentence=None): - '''为英日韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - tagged=0 - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if jp_pattern.match(char): - lang = "JP" - elif zh_pattern.match(char): - lang = "JP" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - lang = None - tagged_text += char - continue - # 如果当前语言与上一个语言不同,就添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - if not tagged: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - - return prev_lang,tagged_text - -def tag_cke(text,prev_sentence=None): - '''为中英韩加tag''' - # 初始化标记变量 - tagged_text = "" - prev_lang = None - # 是否全略过未标签 - tagged=0 - - # 遍历文本 - for char in text: - # 判断当前字符属于哪种语言 - if zh_pattern.match(char): - lang = "ZH" - elif kr_pattern.match(char): - lang = "KR" - elif en_pattern.match(char): - lang = "EN" - # elif num_pattern.match(char): - # lang = prev_sentence - else: - # 略过 - lang = None - tagged_text += char - continue - - # 如果当前语言与上一个语言不同,添加标记 - if lang != prev_lang: - tagged=1 - if prev_lang==None: # 开头 - tagged_text =tags[lang]+tagged_text - else: - tagged_text =tagged_text+tags[prev_lang]+tags[lang] - - # 重置标记变量 - prev_lang = lang - - # 添加当前字符到标记文本中 - tagged_text += char - - # 在最后一个语言的结尾添加对应的标记 - if prev_lang: - tagged_text += tags[prev_lang] - # 未标签则继承上一句标签 - if tagged==0: - prev_lang=prev_sentence - tagged_text =tags[prev_lang]+tagged_text+tags[prev_lang] - return prev_lang,tagged_text - - -def load_checkpoint(checkpoint_path, model, optimizer=None, drop_speaker_emb=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - if k == 'emb_g.weight': - if drop_speaker_emb: - new_state_dict[k] = v - continue - v[:saved_state_dict[k].shape[0], :] = saved_state_dict[k] - new_state_dict[k] = v - else: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict() if optimizer is not None else None, - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/modified_finetune_speaker.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="pretrained_models", - help='Model name') - parser.add_argument('-n', '--max_epochs', type=int, default=50, - help='finetune epochs') - parser.add_argument('--drop_speaker_embed', type=bool, default=False, help='whether to drop existing characters') - - args = parser.parse_args() - model_dir = os.path.join("./", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.max_epochs = args.max_epochs - hparams.drop_speaker_embed = args.drop_speaker_embed - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/demucs/utils.py b/spaces/FridaZuley/RVC_HFKawaii/demucs/utils.py deleted file mode 100644 index 4364184059b1afe3c8379c77793a8e76dccf9699..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/demucs/utils.py +++ /dev/null @@ -1,323 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import errno -import functools -import hashlib -import inspect -import io -import os -import random -import socket -import tempfile -import warnings -import zlib -from contextlib import contextmanager - -from diffq import UniformQuantizer, DiffQuantizer -import torch as th -import tqdm -from torch import distributed -from torch.nn import functional as F - - -def center_trim(tensor, reference): - """ - Center trim `tensor` with respect to `reference`, along the last dimension. - `reference` can also be a number, representing the length to trim to. - If the size difference != 0 mod 2, the extra sample is removed on the right side. - """ - if hasattr(reference, "size"): - reference = reference.size(-1) - delta = tensor.size(-1) - reference - if delta < 0: - raise ValueError("tensor must be larger than reference. " f"Delta is {delta}.") - if delta: - tensor = tensor[..., delta // 2:-(delta - delta // 2)] - return tensor - - -def average_metric(metric, count=1.): - """ - Average `metric` which should be a float across all hosts. `count` should be - the weight for this particular host (i.e. number of examples). - """ - metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda') - distributed.all_reduce(metric, op=distributed.ReduceOp.SUM) - return metric[1].item() / metric[0].item() - - -def free_port(host='', low=20000, high=40000): - """ - Return a port number that is most likely free. - This could suffer from a race condition although - it should be quite rare. - """ - sock = socket.socket() - while True: - port = random.randint(low, high) - try: - sock.bind((host, port)) - except OSError as error: - if error.errno == errno.EADDRINUSE: - continue - raise - return port - - -def sizeof_fmt(num, suffix='B'): - """ - Given `num` bytes, return human readable size. - Taken from https://stackoverflow.com/a/1094933 - """ - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return "%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) - - -def human_seconds(seconds, display='.2f'): - """ - Given `seconds` seconds, return human readable duration. - """ - value = seconds * 1e6 - ratios = [1e3, 1e3, 60, 60, 24] - names = ['us', 'ms', 's', 'min', 'hrs', 'days'] - last = names.pop(0) - for name, ratio in zip(names, ratios): - if value / ratio < 0.3: - break - value /= ratio - last = name - return f"{format(value, display)} {last}" - - -class TensorChunk: - def __init__(self, tensor, offset=0, length=None): - total_length = tensor.shape[-1] - assert offset >= 0 - assert offset < total_length - - if length is None: - length = total_length - offset - else: - length = min(total_length - offset, length) - - self.tensor = tensor - self.offset = offset - self.length = length - self.device = tensor.device - - @property - def shape(self): - shape = list(self.tensor.shape) - shape[-1] = self.length - return shape - - def padded(self, target_length): - delta = target_length - self.length - total_length = self.tensor.shape[-1] - assert delta >= 0 - - start = self.offset - delta // 2 - end = start + target_length - - correct_start = max(0, start) - correct_end = min(total_length, end) - - pad_left = correct_start - start - pad_right = end - correct_end - - out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right)) - assert out.shape[-1] == target_length - return out - - -def tensor_chunk(tensor_or_chunk): - if isinstance(tensor_or_chunk, TensorChunk): - return tensor_or_chunk - else: - assert isinstance(tensor_or_chunk, th.Tensor) - return TensorChunk(tensor_or_chunk) - - -def apply_model(model, mix, shifts=None, split=False, - overlap=0.25, transition_power=1., progress=False): - """ - Apply model to a given mixture. - - Args: - shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec - and apply the oppositve shift to the output. This is repeated `shifts` time and - all predictions are averaged. This effectively makes the model time equivariant - and improves SDR by up to 0.2 points. - split (bool): if True, the input will be broken down in 8 seconds extracts - and predictions will be performed individually on each and concatenated. - Useful for model with large memory footprint like Tasnet. - progress (bool): if True, show a progress bar (requires split=True) - """ - assert transition_power >= 1, "transition_power < 1 leads to weird behavior." - device = mix.device - channels, length = mix.shape - if split: - out = th.zeros(len(model.sources), channels, length, device=device) - sum_weight = th.zeros(length, device=device) - segment = model.segment_length - stride = int((1 - overlap) * segment) - offsets = range(0, length, stride) - scale = stride / model.samplerate - if progress: - offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds') - # We start from a triangle shaped weight, with maximal weight in the middle - # of the segment. Then we normalize and take to the power `transition_power`. - # Large values of transition power will lead to sharper transitions. - weight = th.cat([th.arange(1, segment // 2 + 1), - th.arange(segment - segment // 2, 0, -1)]).to(device) - assert len(weight) == segment - # If the overlap < 50%, this will translate to linear transition when - # transition_power is 1. - weight = (weight / weight.max())**transition_power - for offset in offsets: - chunk = TensorChunk(mix, offset, segment) - chunk_out = apply_model(model, chunk, shifts=shifts) - chunk_length = chunk_out.shape[-1] - out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out - sum_weight[offset:offset + segment] += weight[:chunk_length] - offset += segment - assert sum_weight.min() > 0 - out /= sum_weight - return out - elif shifts: - max_shift = int(0.5 * model.samplerate) - mix = tensor_chunk(mix) - padded_mix = mix.padded(length + 2 * max_shift) - out = 0 - for _ in range(shifts): - offset = random.randint(0, max_shift) - shifted = TensorChunk(padded_mix, offset, length + max_shift - offset) - shifted_out = apply_model(model, shifted) - out += shifted_out[..., max_shift - offset:] - out /= shifts - return out - else: - valid_length = model.valid_length(length) - mix = tensor_chunk(mix) - padded_mix = mix.padded(valid_length) - with th.no_grad(): - out = model(padded_mix.unsqueeze(0))[0] - return center_trim(out, length) - - -@contextmanager -def temp_filenames(count, delete=True): - names = [] - try: - for _ in range(count): - names.append(tempfile.NamedTemporaryFile(delete=False).name) - yield names - finally: - if delete: - for name in names: - os.unlink(name) - - -def get_quantizer(model, args, optimizer=None): - quantizer = None - if args.diffq: - quantizer = DiffQuantizer( - model, min_size=args.q_min_size, group_size=8) - if optimizer is not None: - quantizer.setup_optimizer(optimizer) - elif args.qat: - quantizer = UniformQuantizer( - model, bits=args.qat, min_size=args.q_min_size) - return quantizer - - -def load_model(path, strict=False): - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - load_from = path - package = th.load(load_from, 'cpu') - - klass = package["klass"] - args = package["args"] - kwargs = package["kwargs"] - - if strict: - model = klass(*args, **kwargs) - else: - sig = inspect.signature(klass) - for key in list(kwargs): - if key not in sig.parameters: - warnings.warn("Dropping inexistant parameter " + key) - del kwargs[key] - model = klass(*args, **kwargs) - - state = package["state"] - training_args = package["training_args"] - quantizer = get_quantizer(model, training_args) - - set_state(model, quantizer, state) - return model - - -def get_state(model, quantizer): - if quantizer is None: - state = {k: p.data.to('cpu') for k, p in model.state_dict().items()} - else: - state = quantizer.get_quantized_state() - buf = io.BytesIO() - th.save(state, buf) - state = {'compressed': zlib.compress(buf.getvalue())} - return state - - -def set_state(model, quantizer, state): - if quantizer is None: - model.load_state_dict(state) - else: - buf = io.BytesIO(zlib.decompress(state["compressed"])) - state = th.load(buf, "cpu") - quantizer.restore_quantized_state(state) - - return state - - -def save_state(state, path): - buf = io.BytesIO() - th.save(state, buf) - sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8] - - path = path.parent / (path.stem + "-" + sig + path.suffix) - path.write_bytes(buf.getvalue()) - - -def save_model(model, quantizer, training_args, path): - args, kwargs = model._init_args_kwargs - klass = model.__class__ - - state = get_state(model, quantizer) - - save_to = path - package = { - 'klass': klass, - 'args': args, - 'kwargs': kwargs, - 'state': state, - 'training_args': training_args, - } - th.save(package, save_to) - - -def capture_init(init): - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/utils.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/utils.py deleted file mode 100644 index f4805cdb25e7c50611412a19340ad525d1251d7b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/utils.py +++ /dev/null @@ -1,121 +0,0 @@ -import json - -import numpy as np -import torch -from tqdm import tqdm - - -def load_data(file_name: str = "./infer/lib/uvr5_pack/name_params.json") -> dict: - with open(file_name, "r") as f: - data = json.load(f) - - return data - - -def make_padding(width, cropsize, offset): - left = offset - roi_size = cropsize - left * 2 - if roi_size == 0: - roi_size = cropsize - right = roi_size - (width % roi_size) + left - - return left, right, roi_size - - -def inference(X_spec, device, model, aggressiveness, data): - """ - data : dic configs - """ - - def _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half=True - ): - model.eval() - with torch.no_grad(): - preds = [] - - iterations = [n_window] - - total_iterations = sum(iterations) - for i in tqdm(range(n_window)): - start = i * roi_size - X_mag_window = X_mag_pad[ - None, :, :, start : start + data["window_size"] - ] - X_mag_window = torch.from_numpy(X_mag_window) - if is_half: - X_mag_window = X_mag_window.half() - X_mag_window = X_mag_window.to(device) - - pred = model.predict(X_mag_window, aggressiveness) - - pred = pred.detach().cpu().numpy() - preds.append(pred[0]) - - pred = np.concatenate(preds, axis=2) - return pred - - def preprocess(X_spec): - X_mag = np.abs(X_spec) - X_phase = np.angle(X_spec) - - return X_mag, X_phase - - X_mag, X_phase = preprocess(X_spec) - - coef = X_mag.max() - X_mag_pre = X_mag / coef - - n_frame = X_mag_pre.shape[2] - pad_l, pad_r, roi_size = make_padding(n_frame, data["window_size"], model.offset) - n_window = int(np.ceil(n_frame / roi_size)) - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - if list(model.state_dict().values())[0].dtype == torch.float16: - is_half = True - else: - is_half = False - pred = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred = pred[:, :, :n_frame] - - if data["tta"]: - pad_l += roi_size // 2 - pad_r += roi_size // 2 - n_window += 1 - - X_mag_pad = np.pad(X_mag_pre, ((0, 0), (0, 0), (pad_l, pad_r)), mode="constant") - - pred_tta = _execute( - X_mag_pad, roi_size, n_window, device, model, aggressiveness, is_half - ) - pred_tta = pred_tta[:, :, roi_size // 2 :] - pred_tta = pred_tta[:, :, :n_frame] - - return (pred + pred_tta) * 0.5 * coef, X_mag, np.exp(1.0j * X_phase) - else: - return pred * coef, X_mag, np.exp(1.0j * X_phase) - - -def _get_name_params(model_path, model_hash): - data = load_data() - flag = False - ModelName = model_path - for type in list(data): - for model in list(data[type][0]): - for i in range(len(data[type][0][model])): - if str(data[type][0][model][i]["hash_name"]) == model_hash: - flag = True - elif str(data[type][0][model][i]["hash_name"]) in ModelName: - flag = True - - if flag: - model_params_auto = data[type][0][model][i]["model_params"] - param_name_auto = data[type][0][model][i]["param_name"] - if type == "equivalent": - return param_name_auto, model_params_auto - else: - flag = False - return param_name_auto, model_params_auto diff --git a/spaces/GXSA/bingo/src/components/user-menu.tsx b/spaces/GXSA/bingo/src/components/user-menu.tsx deleted file mode 100644 index 9bd1edc9cf9f39b63629b021f0c1186b1a7c1341..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/user-menu.tsx +++ /dev/null @@ -1,113 +0,0 @@ -'use client' - -import { useEffect, useState } from 'react' -import Image from 'next/image' -import { toast } from 'react-hot-toast' -import { Button } from '@/components/ui/button' -import pkg from '../../package.json' -import { - DropdownMenu, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuSeparator, - DropdownMenuTrigger -} from '@/components/ui/dropdown-menu' -import { IconCopy, IconExternalLink, IconGitHub } from '@/components/ui/icons' -import SettingIcon from '@/assets/images/settings.svg' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function UserMenu() { - const [host, setHost] = useState('') - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - useEffect(() => { - setHost(location.host) - }, []) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - return ( -
      - - - - - - - location.href='#dialog="settings"' - } - className="cursor-pointer" - > - 设置用户 - - - - location.href='#dialog="voice"' - } - className="cursor-pointer" - > - 语音设置 - - - - - 开源地址 - - - - - - - - 托管地址 - 🤗 - - - - - - - 复制站点 - - - - - -
      版本信息 {pkg.version}
      -
      - - -
      站点域名
      -
      copyToClipboard(host)} className="flex gap-1 text-xs text-zinc-500 cursor-pointer"> - {host} -
      -
      -
      -
      -
      - ) -} diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/resnet.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/resnet.py deleted file mode 100644 index 079b10a53fddbdcd292e72e6903091fa6545fbb2..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/resnet.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import cliport.utils.utils as utils - - -class IdentityBlock(nn.Module): - def __init__(self, in_planes, filters, kernel_size, stride=1, final_relu=True, batchnorm=True): - super(IdentityBlock, self).__init__() - self.final_relu = final_relu - self.batchnorm = batchnorm - - filters1, filters2, filters3 = filters - self.conv1 = nn.Conv2d(in_planes, filters1, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(filters1) if self.batchnorm else nn.Identity() - self.conv2 = nn.Conv2d(filters1, filters2, kernel_size=kernel_size, dilation=1, - stride=stride, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(filters2) if self.batchnorm else nn.Identity() - self.conv3 = nn.Conv2d(filters2, filters3, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(filters3) if self.batchnorm else nn.Identity() - - def forward(self, x): - out = F.relu(self.bn1(self.conv1(x))) - out = F.relu(self.bn2(self.conv2(out))) - out = self.bn3(self.conv3(out)) - out += x - if self.final_relu: - out = F.relu(out) - return out - - -class ConvBlock(nn.Module): - def __init__(self, in_planes, filters, kernel_size, stride=1, final_relu=True, batchnorm=True): - super(ConvBlock, self).__init__() - self.final_relu = final_relu - self.batchnorm = batchnorm - - filters1, filters2, filters3 = filters - self.conv1 = nn.Conv2d(in_planes, filters1, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(filters1) if self.batchnorm else nn.Identity() - self.conv2 = nn.Conv2d(filters1, filters2, kernel_size=kernel_size, dilation=1, - stride=stride, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(filters2) if self.batchnorm else nn.Identity() - self.conv3 = nn.Conv2d(filters2, filters3, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(filters3) if self.batchnorm else nn.Identity() - - self.shortcut = nn.Sequential( - nn.Conv2d(in_planes, filters3, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(filters3) if self.batchnorm else nn.Identity() - ) - - def forward(self, x): - out = F.relu(self.bn1(self.conv1(x))) - out = F.relu(self.bn2(self.conv2(out))) - out = self.bn3(self.conv3(out)) - out += self.shortcut(x) - if self.final_relu: - out = F.relu(out) - return out - - -class ResNet43_8s(nn.Module): - def __init__(self, input_shape, output_dim, cfg, device, preprocess): - super(ResNet43_8s, self).__init__() - self.input_shape = input_shape - self.input_dim = input_shape[-1] - self.output_dim = output_dim - self.cfg = cfg - self.device = device - self.batchnorm = self.cfg['train']['batchnorm'] - self.preprocess = preprocess - - self.layers = self._make_layers() - - def _make_layers(self): - layers = nn.Sequential( - # conv1 - nn.Conv2d(self.input_dim, 64, stride=1, kernel_size=3, padding=1), - nn.BatchNorm2d(64) if self.batchnorm else nn.Identity(), - nn.ReLU(True), - - # fcn - ConvBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - - ConvBlock(64, [128, 128, 128], kernel_size=3, stride=2, batchnorm=self.batchnorm), - IdentityBlock(128, [128, 128, 128], kernel_size=3, stride=1, batchnorm=self.batchnorm), - - ConvBlock(128, [256, 256, 256], kernel_size=3, stride=2, batchnorm=self.batchnorm), - IdentityBlock(256, [256, 256, 256], kernel_size=3, stride=1, batchnorm=self.batchnorm), - - ConvBlock(256, [512, 512, 512], kernel_size=3, stride=2, batchnorm=self.batchnorm), - IdentityBlock(512, [512, 512, 512], kernel_size=3, stride=1, batchnorm=self.batchnorm), - - # head - ConvBlock(512, [256, 256, 256], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(256, [256, 256, 256], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - - ConvBlock(256, [128, 128, 128], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(128, [128, 128, 128], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - - ConvBlock(128, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - - # conv2 - ConvBlock(64, [16, 16, self.output_dim], kernel_size=3, stride=1, - final_relu=False, batchnorm=self.batchnorm), - IdentityBlock(self.output_dim, [16, 16, self.output_dim], kernel_size=3, stride=1, - final_relu=False, batchnorm=self.batchnorm), - ) - return layers - - def forward(self, x): - x = self.preprocess(x, dist='transporter') - - out = self.layers(x) - return out \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/docker_build.py b/spaces/Gen-Sim/Gen-Sim/scripts/docker_build.py deleted file mode 100644 index 5c504d2b3869ee3615a8656ab0d077c5a0470a7a..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/docker_build.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python - -######### -# Credit: https://github.com/RobotLocomotion/pytorch-dense-correspondence/blob/master/docker/docker_build.py -######### - -from __future__ import print_function - -import argparse -import os -import getpass - -if __name__=="__main__": - - print("building docker container . . . ") - user_name = getpass.getuser() - default_image_name = user_name + "-cliport" - - - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--image", type=str, - help="name for the newly created docker image", default=default_image_name) - - parser.add_argument("-dr", "--dry_run", action='store_true', help="(optional) perform a dry_run, print the command that would have been executed but don't execute it.") - - parser.add_argument("-pw", "--password", type=str, - help="(optional) password for the user", default="password") - - parser.add_argument('-uid','--user_id', type=int, help="(optional) user id for this user", default=os.getuid()) - parser.add_argument('-gid','--group_id', type=int, help="(optional) user gid for this user", default=os.getgid()) - - parser.add_argument('-p', "--passthrough", type=str, help="(optional) passthrough arguments to add to the docker build") - - args = parser.parse_args() - print("building docker image named ", args.image) - cmd = "docker build --build-arg USER_NAME=%(user_name)s --build-arg USER_PASSWORD=%(password)s --build-arg USER_ID=%(user_id)s --build-arg USER_GID=%(group_id)s" \ - %{'user_name': user_name, 'password': args.password, 'user_id': args.user_id, 'group_id': args.group_id} - - if args.passthrough: - cmd += " " + args.passthrough - - cmd += " -t %s -f Dockerfile ." % args.image - - - print("command = \n \n", cmd) - print("") - - # build the docker image - if not args.dry_run: - print("executing shell command") - os.system(cmd) - else: - print("dry run, not executing command") \ No newline at end of file diff --git "a/spaces/Gmq-x/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/Gmq-x/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000 --- "a/spaces/Gmq-x/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,194 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file, get_conf -import re, requests, unicodedata, os -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - msg = '正常' - # ** gpt request ** - # 单线,获取文章meta信息 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials and translate to Chinese。", - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - diff --git a/spaces/GodParticle69/minor_demo/mrcnn/model.py b/spaces/GodParticle69/minor_demo/mrcnn/model.py deleted file mode 100644 index fe9ed45da41953a9ae4fe5b589b2e5a204cff5cf..0000000000000000000000000000000000000000 --- a/spaces/GodParticle69/minor_demo/mrcnn/model.py +++ /dev/null @@ -1,2756 +0,0 @@ -""" -Mask R-CNN -The main Mask R-CNN model implemenetation. - -Copyright (c) 2017 Matterport, Inc. -Licensed under the MIT License (see LICENSE for details) -Written by Waleed Abdulla -""" - -import os -import random -import datetime -import re -import math -import logging -from collections import OrderedDict -import multiprocessing -import numpy as np -import skimage.transform -import tensorflow as tf -import keras -import keras.backend as K -import keras.layers as KL -import keras.engine as KE -import keras.models as KM - -from mrcnn import utils - -# Requires TensorFlow 1.3+ and Keras 2.0.8+. -from distutils.version import LooseVersion -assert LooseVersion(tf.__version__) >= LooseVersion("1.3") -assert LooseVersion(keras.__version__) >= LooseVersion('2.0.8') - - -############################################################ -# Utility Functions -############################################################ - -def log(text, array=None): - """Prints a text message. And, optionally, if a Numpy array is provided it - prints it's shape, min, and max values. - """ - if array is not None: - text = text.ljust(25) - text += ("shape: {:20} min: {:10.5f} max: {:10.5f} {}".format( - str(array.shape), - array.min() if array.size else "", - array.max() if array.size else "", - array.dtype)) - print(text) - - -class BatchNorm(KL.BatchNormalization): - """Extends the Keras BatchNormalization class to allow a central place - to make changes if needed. - - Batch normalization has a negative effect on training if batches are small - so this layer is often frozen (via setting in Config class) and functions - as linear layer. - """ - def call(self, inputs, training=None): - """ - Note about training values: - None: Train BN layers. This is the normal mode - False: Freeze BN layers. Good when batch size is small - True: (don't use). Set layer in training mode even when inferencing - """ - return super(self.__class__, self).call(inputs, training=training) - - -def compute_backbone_shapes(config, image_shape): - """Computes the width and height of each stage of the backbone network. - - Returns: - [N, (height, width)]. Where N is the number of stages - """ - # Currently supports ResNet only - assert config.BACKBONE in ["resnet50", "resnet101"] - return np.array( - [[int(math.ceil(image_shape[0] / stride)), - int(math.ceil(image_shape[1] / stride))] - for stride in config.BACKBONE_STRIDES]) - - -############################################################ -# Resnet Graph -############################################################ - -# Code adopted from: -# https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py - -def identity_block(input_tensor, kernel_size, filters, stage, block, - use_bias=True, train_bn=True): - """The identity_block is the block that has no conv layer at shortcut - # Arguments - input_tensor: input tensor - kernel_size: defualt 3, the kernel size of middle conv layer at main path - filters: list of integers, the nb_filters of 3 conv layer at main path - stage: integer, current stage label, used for generating layer names - block: 'a','b'..., current block label, used for generating layer names - use_bias: Boolean. To use or not use a bias in conv layers. - train_bn: Boolean. Train or freeze Batch Norm layres - """ - nb_filter1, nb_filter2, nb_filter3 = filters - conv_name_base = 'res' + str(stage) + block + '_branch' - bn_name_base = 'bn' + str(stage) + block + '_branch' - - x = KL.Conv2D(nb_filter1, (1, 1), name=conv_name_base + '2a', - use_bias=use_bias)(input_tensor) - x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same', - name=conv_name_base + '2b', use_bias=use_bias)(x) - x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c', - use_bias=use_bias)(x) - x = BatchNorm(name=bn_name_base + '2c')(x, training=train_bn) - - x = KL.Add()([x, input_tensor]) - x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x) - return x - - -def conv_block(input_tensor, kernel_size, filters, stage, block, - strides=(2, 2), use_bias=True, train_bn=True): - """conv_block is the block that has a conv layer at shortcut - # Arguments - input_tensor: input tensor - kernel_size: defualt 3, the kernel size of middle conv layer at main path - filters: list of integers, the nb_filters of 3 conv layer at main path - stage: integer, current stage label, used for generating layer names - block: 'a','b'..., current block label, used for generating layer names - use_bias: Boolean. To use or not use a bias in conv layers. - train_bn: Boolean. Train or freeze Batch Norm layres - Note that from stage 3, the first conv layer at main path is with subsample=(2,2) - And the shortcut should have subsample=(2,2) as well - """ - nb_filter1, nb_filter2, nb_filter3 = filters - conv_name_base = 'res' + str(stage) + block + '_branch' - bn_name_base = 'bn' + str(stage) + block + '_branch' - - x = KL.Conv2D(nb_filter1, (1, 1), strides=strides, - name=conv_name_base + '2a', use_bias=use_bias)(input_tensor) - x = BatchNorm(name=bn_name_base + '2a')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same', - name=conv_name_base + '2b', use_bias=use_bias)(x) - x = BatchNorm(name=bn_name_base + '2b')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.Conv2D(nb_filter3, (1, 1), name=conv_name_base + - '2c', use_bias=use_bias)(x) - x = BatchNorm(name=bn_name_base + '2c')(x, training=train_bn) - - shortcut = KL.Conv2D(nb_filter3, (1, 1), strides=strides, - name=conv_name_base + '1', use_bias=use_bias)(input_tensor) - shortcut = BatchNorm(name=bn_name_base + '1')(shortcut, training=train_bn) - - x = KL.Add()([x, shortcut]) - x = KL.Activation('relu', name='res' + str(stage) + block + '_out')(x) - return x - - -def resnet_graph(input_image, architecture, stage5=False, train_bn=True): - """Build a ResNet graph. - architecture: Can be resnet50 or resnet101 - stage5: Boolean. If False, stage5 of the network is not created - train_bn: Boolean. Train or freeze Batch Norm layres - """ - assert architecture in ["resnet50", "resnet101"] - # Stage 1 - x = KL.ZeroPadding2D((3, 3))(input_image) - x = KL.Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=True)(x) - x = BatchNorm(name='bn_conv1')(x, training=train_bn) - x = KL.Activation('relu')(x) - C1 = x = KL.MaxPooling2D((3, 3), strides=(2, 2), padding="same")(x) - # Stage 2 - x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1), train_bn=train_bn) - x = identity_block(x, 3, [64, 64, 256], stage=2, block='b', train_bn=train_bn) - C2 = x = identity_block(x, 3, [64, 64, 256], stage=2, block='c', train_bn=train_bn) - # Stage 3 - x = conv_block(x, 3, [128, 128, 512], stage=3, block='a', train_bn=train_bn) - x = identity_block(x, 3, [128, 128, 512], stage=3, block='b', train_bn=train_bn) - x = identity_block(x, 3, [128, 128, 512], stage=3, block='c', train_bn=train_bn) - C3 = x = identity_block(x, 3, [128, 128, 512], stage=3, block='d', train_bn=train_bn) - # Stage 4 - x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a', train_bn=train_bn) - block_count = {"resnet50": 5, "resnet101": 22}[architecture] - for i in range(block_count): - x = identity_block(x, 3, [256, 256, 1024], stage=4, block=chr(98 + i), train_bn=train_bn) - C4 = x - # Stage 5 - if stage5: - x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a', train_bn=train_bn) - x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b', train_bn=train_bn) - C5 = x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c', train_bn=train_bn) - else: - C5 = None - return [C1, C2, C3, C4, C5] - - -############################################################ -# Proposal Layer -############################################################ - -def apply_box_deltas_graph(boxes, deltas): - """Applies the given deltas to the given boxes. - boxes: [N, (y1, x1, y2, x2)] boxes to update - deltas: [N, (dy, dx, log(dh), log(dw))] refinements to apply - """ - # Convert to y, x, h, w - height = boxes[:, 2] - boxes[:, 0] - width = boxes[:, 3] - boxes[:, 1] - center_y = boxes[:, 0] + 0.5 * height - center_x = boxes[:, 1] + 0.5 * width - # Apply deltas - center_y += deltas[:, 0] * height - center_x += deltas[:, 1] * width - height *= tf.exp(deltas[:, 2]) - width *= tf.exp(deltas[:, 3]) - # Convert back to y1, x1, y2, x2 - y1 = center_y - 0.5 * height - x1 = center_x - 0.5 * width - y2 = y1 + height - x2 = x1 + width - result = tf.stack([y1, x1, y2, x2], axis=1, name="apply_box_deltas_out") - return result - - -def clip_boxes_graph(boxes, window): - """ - boxes: [N, (y1, x1, y2, x2)] - window: [4] in the form y1, x1, y2, x2 - """ - # Split - wy1, wx1, wy2, wx2 = tf.split(window, 4) - y1, x1, y2, x2 = tf.split(boxes, 4, axis=1) - # Clip - y1 = tf.maximum(tf.minimum(y1, wy2), wy1) - x1 = tf.maximum(tf.minimum(x1, wx2), wx1) - y2 = tf.maximum(tf.minimum(y2, wy2), wy1) - x2 = tf.maximum(tf.minimum(x2, wx2), wx1) - clipped = tf.concat([y1, x1, y2, x2], axis=1, name="clipped_boxes") - clipped.set_shape((clipped.shape[0], 4)) - return clipped - - -class ProposalLayer(KE.Layer): - """Receives anchor scores and selects a subset to pass as proposals - to the second stage. Filtering is done based on anchor scores and - non-max suppression to remove overlaps. It also applies bounding - box refinement deltas to anchors. - - Inputs: - rpn_probs: [batch, anchors, (bg prob, fg prob)] - rpn_bbox: [batch, anchors, (dy, dx, log(dh), log(dw))] - anchors: [batch, (y1, x1, y2, x2)] anchors in normalized coordinates - - Returns: - Proposals in normalized coordinates [batch, rois, (y1, x1, y2, x2)] - """ - - def __init__(self, proposal_count, nms_threshold, config=None, **kwargs): - super(ProposalLayer, self).__init__(**kwargs) - self.config = config - self.proposal_count = proposal_count - self.nms_threshold = nms_threshold - - def call(self, inputs): - # Box Scores. Use the foreground class confidence. [Batch, num_rois, 1] - scores = inputs[0][:, :, 1] - # Box deltas [batch, num_rois, 4] - deltas = inputs[1] - deltas = deltas * np.reshape(self.config.RPN_BBOX_STD_DEV, [1, 1, 4]) - # Anchors - anchors = inputs[2] - - # Improve performance by trimming to top anchors by score - # and doing the rest on the smaller subset. - pre_nms_limit = tf.minimum(6000, tf.shape(anchors)[1]) - ix = tf.nn.top_k(scores, pre_nms_limit, sorted=True, - name="top_anchors").indices - scores = utils.batch_slice([scores, ix], lambda x, y: tf.gather(x, y), - self.config.IMAGES_PER_GPU) - deltas = utils.batch_slice([deltas, ix], lambda x, y: tf.gather(x, y), - self.config.IMAGES_PER_GPU) - pre_nms_anchors = utils.batch_slice([anchors, ix], lambda a, x: tf.gather(a, x), - self.config.IMAGES_PER_GPU, - names=["pre_nms_anchors"]) - - # Apply deltas to anchors to get refined anchors. - # [batch, N, (y1, x1, y2, x2)] - boxes = utils.batch_slice([pre_nms_anchors, deltas], - lambda x, y: apply_box_deltas_graph(x, y), - self.config.IMAGES_PER_GPU, - names=["refined_anchors"]) - - # Clip to image boundaries. Since we're in normalized coordinates, - # clip to 0..1 range. [batch, N, (y1, x1, y2, x2)] - window = np.array([0, 0, 1, 1], dtype=np.float32) - boxes = utils.batch_slice(boxes, - lambda x: clip_boxes_graph(x, window), - self.config.IMAGES_PER_GPU, - names=["refined_anchors_clipped"]) - - # Filter out small boxes - # According to Xinlei Chen's paper, this reduces detection accuracy - # for small objects, so we're skipping it. - - # Non-max suppression - def nms(boxes, scores): - indices = tf.image.non_max_suppression( - boxes, scores, self.proposal_count, - self.nms_threshold, name="rpn_non_max_suppression") - proposals = tf.gather(boxes, indices) - # Pad if needed - padding = tf.maximum(self.proposal_count - tf.shape(proposals)[0], 0) - proposals = tf.pad(proposals, [(0, padding), (0, 0)]) - return proposals - proposals = utils.batch_slice([boxes, scores], nms, - self.config.IMAGES_PER_GPU) - return proposals - - def compute_output_shape(self, input_shape): - return (None, self.proposal_count, 4) - - -############################################################ -# ROIAlign Layer -############################################################ - -def log2_graph(x): - """Implementatin of Log2. TF doesn't have a native implemenation.""" - return tf.log(x) / tf.log(2.0) - - -class PyramidROIAlign(KE.Layer): - """Implements ROI Pooling on multiple levels of the feature pyramid. - - Params: - - pool_shape: [height, width] of the output pooled regions. Usually [7, 7] - - Inputs: - - boxes: [batch, num_boxes, (y1, x1, y2, x2)] in normalized - coordinates. Possibly padded with zeros if not enough - boxes to fill the array. - - image_meta: [batch, (meta data)] Image details. See compose_image_meta() - - Feature maps: List of feature maps from different levels of the pyramid. - Each is [batch, height, width, channels] - - Output: - Pooled regions in the shape: [batch, num_boxes, height, width, channels]. - The width and height are those specific in the pool_shape in the layer - constructor. - """ - - def __init__(self, pool_shape, **kwargs): - super(PyramidROIAlign, self).__init__(**kwargs) - self.pool_shape = tuple(pool_shape) - - def call(self, inputs): - # Crop boxes [batch, num_boxes, (y1, x1, y2, x2)] in normalized coords - boxes = inputs[0] - - # Image meta - # Holds details about the image. See compose_image_meta() - image_meta = inputs[1] - - # Feature Maps. List of feature maps from different level of the - # feature pyramid. Each is [batch, height, width, channels] - feature_maps = inputs[2:] - - # Assign each ROI to a level in the pyramid based on the ROI area. - y1, x1, y2, x2 = tf.split(boxes, 4, axis=2) - h = y2 - y1 - w = x2 - x1 - # Use shape of first image. Images in a batch must have the same size. - image_shape = parse_image_meta_graph(image_meta)['image_shape'][0] - # Equation 1 in the Feature Pyramid Networks paper. Account for - # the fact that our coordinates are normalized here. - # e.g. a 224x224 ROI (in pixels) maps to P4 - image_area = tf.cast(image_shape[0] * image_shape[1], tf.float32) - roi_level = log2_graph(tf.sqrt(h * w) / (224.0 / tf.sqrt(image_area))) - roi_level = tf.minimum(5, tf.maximum( - 2, 4 + tf.cast(tf.round(roi_level), tf.int32))) - roi_level = tf.squeeze(roi_level, 2) - - # Loop through levels and apply ROI pooling to each. P2 to P5. - pooled = [] - box_to_level = [] - for i, level in enumerate(range(2, 6)): - ix = tf.where(tf.equal(roi_level, level)) - level_boxes = tf.gather_nd(boxes, ix) - - # Box indicies for crop_and_resize. - box_indices = tf.cast(ix[:, 0], tf.int32) - - # Keep track of which box is mapped to which level - box_to_level.append(ix) - - # Stop gradient propogation to ROI proposals - level_boxes = tf.stop_gradient(level_boxes) - box_indices = tf.stop_gradient(box_indices) - - # Crop and Resize - # From Mask R-CNN paper: "We sample four regular locations, so - # that we can evaluate either max or average pooling. In fact, - # interpolating only a single value at each bin center (without - # pooling) is nearly as effective." - # - # Here we use the simplified approach of a single value per bin, - # which is how it's done in tf.crop_and_resize() - # Result: [batch * num_boxes, pool_height, pool_width, channels] - pooled.append(tf.image.crop_and_resize( - feature_maps[i], level_boxes, box_indices, self.pool_shape, - method="bilinear")) - - # Pack pooled features into one tensor - pooled = tf.concat(pooled, axis=0) - - # Pack box_to_level mapping into one array and add another - # column representing the order of pooled boxes - box_to_level = tf.concat(box_to_level, axis=0) - box_range = tf.expand_dims(tf.range(tf.shape(box_to_level)[0]), 1) - box_to_level = tf.concat([tf.cast(box_to_level, tf.int32), box_range], - axis=1) - - # Rearrange pooled features to match the order of the original boxes - # Sort box_to_level by batch then box index - # TF doesn't have a way to sort by two columns, so merge them and sort. - sorting_tensor = box_to_level[:, 0] * 100000 + box_to_level[:, 1] - ix = tf.nn.top_k(sorting_tensor, k=tf.shape( - box_to_level)[0]).indices[::-1] - ix = tf.gather(box_to_level[:, 2], ix) - pooled = tf.gather(pooled, ix) - - # Re-add the batch dimension - pooled = tf.expand_dims(pooled, 0) - return pooled - - def compute_output_shape(self, input_shape): - return input_shape[0][:2] + self.pool_shape + (input_shape[2][-1], ) - - -############################################################ -# Detection Target Layer -############################################################ - -def overlaps_graph(boxes1, boxes2): - """Computes IoU overlaps between two sets of boxes. - boxes1, boxes2: [N, (y1, x1, y2, x2)]. - """ - # 1. Tile boxes2 and repeate boxes1. This allows us to compare - # every boxes1 against every boxes2 without loops. - # TF doesn't have an equivalent to np.repeate() so simulate it - # using tf.tile() and tf.reshape. - b1 = tf.reshape(tf.tile(tf.expand_dims(boxes1, 1), - [1, 1, tf.shape(boxes2)[0]]), [-1, 4]) - b2 = tf.tile(boxes2, [tf.shape(boxes1)[0], 1]) - # 2. Compute intersections - b1_y1, b1_x1, b1_y2, b1_x2 = tf.split(b1, 4, axis=1) - b2_y1, b2_x1, b2_y2, b2_x2 = tf.split(b2, 4, axis=1) - y1 = tf.maximum(b1_y1, b2_y1) - x1 = tf.maximum(b1_x1, b2_x1) - y2 = tf.minimum(b1_y2, b2_y2) - x2 = tf.minimum(b1_x2, b2_x2) - intersection = tf.maximum(x2 - x1, 0) * tf.maximum(y2 - y1, 0) - # 3. Compute unions - b1_area = (b1_y2 - b1_y1) * (b1_x2 - b1_x1) - b2_area = (b2_y2 - b2_y1) * (b2_x2 - b2_x1) - union = b1_area + b2_area - intersection - # 4. Compute IoU and reshape to [boxes1, boxes2] - iou = intersection / union - overlaps = tf.reshape(iou, [tf.shape(boxes1)[0], tf.shape(boxes2)[0]]) - return overlaps - - -def detection_targets_graph(proposals, gt_class_ids, gt_boxes, gt_masks, config): - """Generates detection targets for one image. Subsamples proposals and - generates target class IDs, bounding box deltas, and masks for each. - - Inputs: - proposals: [N, (y1, x1, y2, x2)] in normalized coordinates. Might - be zero padded if there are not enough proposals. - gt_class_ids: [MAX_GT_INSTANCES] int class IDs - gt_boxes: [MAX_GT_INSTANCES, (y1, x1, y2, x2)] in normalized coordinates. - gt_masks: [height, width, MAX_GT_INSTANCES] of boolean type. - - Returns: Target ROIs and corresponding class IDs, bounding box shifts, - and masks. - rois: [TRAIN_ROIS_PER_IMAGE, (y1, x1, y2, x2)] in normalized coordinates - class_ids: [TRAIN_ROIS_PER_IMAGE]. Integer class IDs. Zero padded. - deltas: [TRAIN_ROIS_PER_IMAGE, NUM_CLASSES, (dy, dx, log(dh), log(dw))] - Class-specific bbox refinements. - masks: [TRAIN_ROIS_PER_IMAGE, height, width). Masks cropped to bbox - boundaries and resized to neural network output size. - - Note: Returned arrays might be zero padded if not enough target ROIs. - """ - # Assertions - asserts = [ - tf.Assert(tf.greater(tf.shape(proposals)[0], 0), [proposals], - name="roi_assertion"), - ] - with tf.control_dependencies(asserts): - proposals = tf.identity(proposals) - - # Remove zero padding - proposals, _ = trim_zeros_graph(proposals, name="trim_proposals") - gt_boxes, non_zeros = trim_zeros_graph(gt_boxes, name="trim_gt_boxes") - gt_class_ids = tf.boolean_mask(gt_class_ids, non_zeros, - name="trim_gt_class_ids") - gt_masks = tf.gather(gt_masks, tf.where(non_zeros)[:, 0], axis=2, - name="trim_gt_masks") - - # Handle COCO crowds - # A crowd box in COCO is a bounding box around several instances. Exclude - # them from training. A crowd box is given a negative class ID. - crowd_ix = tf.where(gt_class_ids < 0)[:, 0] - non_crowd_ix = tf.where(gt_class_ids > 0)[:, 0] - crowd_boxes = tf.gather(gt_boxes, crowd_ix) - crowd_masks = tf.gather(gt_masks, crowd_ix, axis=2) - gt_class_ids = tf.gather(gt_class_ids, non_crowd_ix) - gt_boxes = tf.gather(gt_boxes, non_crowd_ix) - gt_masks = tf.gather(gt_masks, non_crowd_ix, axis=2) - - # Compute overlaps matrix [proposals, gt_boxes] - overlaps = overlaps_graph(proposals, gt_boxes) - - # Compute overlaps with crowd boxes [anchors, crowds] - crowd_overlaps = overlaps_graph(proposals, crowd_boxes) - crowd_iou_max = tf.reduce_max(crowd_overlaps, axis=1) - no_crowd_bool = (crowd_iou_max < 0.001) - - # Determine postive and negative ROIs - roi_iou_max = tf.reduce_max(overlaps, axis=1) - # 1. Positive ROIs are those with >= 0.5 IoU with a GT box - positive_roi_bool = (roi_iou_max >= 0.5) - positive_indices = tf.where(positive_roi_bool)[:, 0] - # 2. Negative ROIs are those with < 0.5 with every GT box. Skip crowds. - negative_indices = tf.where(tf.logical_and(roi_iou_max < 0.5, no_crowd_bool))[:, 0] - - # Subsample ROIs. Aim for 33% positive - # Positive ROIs - positive_count = int(config.TRAIN_ROIS_PER_IMAGE * - config.ROI_POSITIVE_RATIO) - positive_indices = tf.random_shuffle(positive_indices)[:positive_count] - positive_count = tf.shape(positive_indices)[0] - # Negative ROIs. Add enough to maintain positive:negative ratio. - r = 1.0 / config.ROI_POSITIVE_RATIO - negative_count = tf.cast(r * tf.cast(positive_count, tf.float32), tf.int32) - positive_count - negative_indices = tf.random_shuffle(negative_indices)[:negative_count] - # Gather selected ROIs - positive_rois = tf.gather(proposals, positive_indices) - negative_rois = tf.gather(proposals, negative_indices) - - # Assign positive ROIs to GT boxes. - positive_overlaps = tf.gather(overlaps, positive_indices) - roi_gt_box_assignment = tf.argmax(positive_overlaps, axis=1) - roi_gt_boxes = tf.gather(gt_boxes, roi_gt_box_assignment) - roi_gt_class_ids = tf.gather(gt_class_ids, roi_gt_box_assignment) - - # Compute bbox refinement for positive ROIs - deltas = utils.box_refinement_graph(positive_rois, roi_gt_boxes) - deltas /= config.BBOX_STD_DEV - - # Assign positive ROIs to GT masks - # Permute masks to [N, height, width, 1] - transposed_masks = tf.expand_dims(tf.transpose(gt_masks, [2, 0, 1]), -1) - # Pick the right mask for each ROI - roi_masks = tf.gather(transposed_masks, roi_gt_box_assignment) - - # Compute mask targets - boxes = positive_rois - if config.USE_MINI_MASK: - # Transform ROI corrdinates from normalized image space - # to normalized mini-mask space. - y1, x1, y2, x2 = tf.split(positive_rois, 4, axis=1) - gt_y1, gt_x1, gt_y2, gt_x2 = tf.split(roi_gt_boxes, 4, axis=1) - gt_h = gt_y2 - gt_y1 - gt_w = gt_x2 - gt_x1 - y1 = (y1 - gt_y1) / gt_h - x1 = (x1 - gt_x1) / gt_w - y2 = (y2 - gt_y1) / gt_h - x2 = (x2 - gt_x1) / gt_w - boxes = tf.concat([y1, x1, y2, x2], 1) - box_ids = tf.range(0, tf.shape(roi_masks)[0]) - masks = tf.image.crop_and_resize(tf.cast(roi_masks, tf.float32), boxes, - box_ids, - config.MASK_SHAPE) - # Remove the extra dimension from masks. - masks = tf.squeeze(masks, axis=3) - - # Threshold mask pixels at 0.5 to have GT masks be 0 or 1 to use with - # binary cross entropy loss. - masks = tf.round(masks) - - # Append negative ROIs and pad bbox deltas and masks that - # are not used for negative ROIs with zeros. - rois = tf.concat([positive_rois, negative_rois], axis=0) - N = tf.shape(negative_rois)[0] - P = tf.maximum(config.TRAIN_ROIS_PER_IMAGE - tf.shape(rois)[0], 0) - rois = tf.pad(rois, [(0, P), (0, 0)]) - roi_gt_boxes = tf.pad(roi_gt_boxes, [(0, N + P), (0, 0)]) - roi_gt_class_ids = tf.pad(roi_gt_class_ids, [(0, N + P)]) - deltas = tf.pad(deltas, [(0, N + P), (0, 0)]) - masks = tf.pad(masks, [[0, N + P], (0, 0), (0, 0)]) - - return rois, roi_gt_class_ids, deltas, masks - - -class DetectionTargetLayer(KE.Layer): - """Subsamples proposals and generates target box refinement, class_ids, - and masks for each. - - Inputs: - proposals: [batch, N, (y1, x1, y2, x2)] in normalized coordinates. Might - be zero padded if there are not enough proposals. - gt_class_ids: [batch, MAX_GT_INSTANCES] Integer class IDs. - gt_boxes: [batch, MAX_GT_INSTANCES, (y1, x1, y2, x2)] in normalized - coordinates. - gt_masks: [batch, height, width, MAX_GT_INSTANCES] of boolean type - - Returns: Target ROIs and corresponding class IDs, bounding box shifts, - and masks. - rois: [batch, TRAIN_ROIS_PER_IMAGE, (y1, x1, y2, x2)] in normalized - coordinates - target_class_ids: [batch, TRAIN_ROIS_PER_IMAGE]. Integer class IDs. - target_deltas: [batch, TRAIN_ROIS_PER_IMAGE, NUM_CLASSES, - (dy, dx, log(dh), log(dw), class_id)] - Class-specific bbox refinements. - target_mask: [batch, TRAIN_ROIS_PER_IMAGE, height, width) - Masks cropped to bbox boundaries and resized to neural - network output size. - - Note: Returned arrays might be zero padded if not enough target ROIs. - """ - - def __init__(self, config, **kwargs): - super(DetectionTargetLayer, self).__init__(**kwargs) - self.config = config - - def call(self, inputs): - proposals = inputs[0] - gt_class_ids = inputs[1] - gt_boxes = inputs[2] - gt_masks = inputs[3] - - # Slice the batch and run a graph for each slice - # TODO: Rename target_bbox to target_deltas for clarity - names = ["rois", "target_class_ids", "target_bbox", "target_mask"] - outputs = utils.batch_slice( - [proposals, gt_class_ids, gt_boxes, gt_masks], - lambda w, x, y, z: detection_targets_graph( - w, x, y, z, self.config), - self.config.IMAGES_PER_GPU, names=names) - return outputs - - def compute_output_shape(self, input_shape): - return [ - (None, self.config.TRAIN_ROIS_PER_IMAGE, 4), # rois - (None, 1), # class_ids - (None, self.config.TRAIN_ROIS_PER_IMAGE, 4), # deltas - (None, self.config.TRAIN_ROIS_PER_IMAGE, self.config.MASK_SHAPE[0], - self.config.MASK_SHAPE[1]) # masks - ] - - def compute_mask(self, inputs, mask=None): - return [None, None, None, None] - - -############################################################ -# Detection Layer -############################################################ - -def refine_detections_graph(rois, probs, deltas, window, config): - """Refine classified proposals and filter overlaps and return final - detections. - - Inputs: - rois: [N, (y1, x1, y2, x2)] in normalized coordinates - probs: [N, num_classes]. Class probabilities. - deltas: [N, num_classes, (dy, dx, log(dh), log(dw))]. Class-specific - bounding box deltas. - window: (y1, x1, y2, x2) in image coordinates. The part of the image - that contains the image excluding the padding. - - Returns detections shaped: [N, (y1, x1, y2, x2, class_id, score)] where - coordinates are normalized. - """ - # Class IDs per ROI - class_ids = tf.argmax(probs, axis=1, output_type=tf.int32) - # Class probability of the top class of each ROI - indices = tf.stack([tf.range(probs.shape[0]), class_ids], axis=1) - class_scores = tf.gather_nd(probs, indices) - # Class-specific bounding box deltas - deltas_specific = tf.gather_nd(deltas, indices) - # Apply bounding box deltas - # Shape: [boxes, (y1, x1, y2, x2)] in normalized coordinates - refined_rois = apply_box_deltas_graph( - rois, deltas_specific * config.BBOX_STD_DEV) - # Clip boxes to image window - refined_rois = clip_boxes_graph(refined_rois, window) - - # TODO: Filter out boxes with zero area - - # Filter out background boxes - keep = tf.where(class_ids > 0)[:, 0] - # Filter out low confidence boxes - if config.DETECTION_MIN_CONFIDENCE: - conf_keep = tf.where(class_scores >= config.DETECTION_MIN_CONFIDENCE)[:, 0] - keep = tf.sets.set_intersection(tf.expand_dims(keep, 0), - tf.expand_dims(conf_keep, 0)) - keep = tf.sparse_tensor_to_dense(keep)[0] - - # Apply per-class NMS - # 1. Prepare variables - pre_nms_class_ids = tf.gather(class_ids, keep) - pre_nms_scores = tf.gather(class_scores, keep) - pre_nms_rois = tf.gather(refined_rois, keep) - unique_pre_nms_class_ids = tf.unique(pre_nms_class_ids)[0] - - def nms_keep_map(class_id): - """Apply Non-Maximum Suppression on ROIs of the given class.""" - # Indices of ROIs of the given class - ixs = tf.where(tf.equal(pre_nms_class_ids, class_id))[:, 0] - # Apply NMS - class_keep = tf.image.non_max_suppression( - tf.gather(pre_nms_rois, ixs), - tf.gather(pre_nms_scores, ixs), - max_output_size=config.DETECTION_MAX_INSTANCES, - iou_threshold=config.DETECTION_NMS_THRESHOLD) - # Map indicies - class_keep = tf.gather(keep, tf.gather(ixs, class_keep)) - # Pad with -1 so returned tensors have the same shape - gap = config.DETECTION_MAX_INSTANCES - tf.shape(class_keep)[0] - class_keep = tf.pad(class_keep, [(0, gap)], - mode='CONSTANT', constant_values=-1) - # Set shape so map_fn() can infer result shape - class_keep.set_shape([config.DETECTION_MAX_INSTANCES]) - return class_keep - - # 2. Map over class IDs - nms_keep = tf.map_fn(nms_keep_map, unique_pre_nms_class_ids, - dtype=tf.int64) - # 3. Merge results into one list, and remove -1 padding - nms_keep = tf.reshape(nms_keep, [-1]) - nms_keep = tf.gather(nms_keep, tf.where(nms_keep > -1)[:, 0]) - # 4. Compute intersection between keep and nms_keep - keep = tf.sets.set_intersection(tf.expand_dims(keep, 0), - tf.expand_dims(nms_keep, 0)) - keep = tf.sparse_tensor_to_dense(keep)[0] - # Keep top detections - roi_count = config.DETECTION_MAX_INSTANCES - class_scores_keep = tf.gather(class_scores, keep) - num_keep = tf.minimum(tf.shape(class_scores_keep)[0], roi_count) - top_ids = tf.nn.top_k(class_scores_keep, k=num_keep, sorted=True)[1] - keep = tf.gather(keep, top_ids) - - # Arrange output as [N, (y1, x1, y2, x2, class_id, score)] - # Coordinates are normalized. - detections = tf.concat([ - tf.gather(refined_rois, keep), - tf.to_float(tf.gather(class_ids, keep))[..., tf.newaxis], - tf.gather(class_scores, keep)[..., tf.newaxis] - ], axis=1) - - # Pad with zeros if detections < DETECTION_MAX_INSTANCES - gap = config.DETECTION_MAX_INSTANCES - tf.shape(detections)[0] - detections = tf.pad(detections, [(0, gap), (0, 0)], "CONSTANT") - return detections - - -class DetectionLayer(KE.Layer): - """Takes classified proposal boxes and their bounding box deltas and - returns the final detection boxes. - - Returns: - [batch, num_detections, (y1, x1, y2, x2, class_id, class_score)] where - coordinates are normalized. - """ - - def __init__(self, config=None, **kwargs): - super(DetectionLayer, self).__init__(**kwargs) - self.config = config - - def call(self, inputs): - rois = inputs[0] - mrcnn_class = inputs[1] - mrcnn_bbox = inputs[2] - image_meta = inputs[3] - - # Get windows of images in normalized coordinates. Windows are the area - # in the image that excludes the padding. - # Use the shape of the first image in the batch to normalize the window - # because we know that all images get resized to the same size. - m = parse_image_meta_graph(image_meta) - image_shape = m['image_shape'][0] - window = norm_boxes_graph(m['window'], image_shape[:2]) - - # Run detection refinement graph on each item in the batch - detections_batch = utils.batch_slice( - [rois, mrcnn_class, mrcnn_bbox, window], - lambda x, y, w, z: refine_detections_graph(x, y, w, z, self.config), - self.config.IMAGES_PER_GPU) - - # Reshape output - # [batch, num_detections, (y1, x1, y2, x2, class_score)] in - # normalized coordinates - return tf.reshape( - detections_batch, - [self.config.BATCH_SIZE, self.config.DETECTION_MAX_INSTANCES, 6]) - - def compute_output_shape(self, input_shape): - return (None, self.config.DETECTION_MAX_INSTANCES, 6) - - -############################################################ -# Region Proposal Network (RPN) -############################################################ - -def rpn_graph(feature_map, anchors_per_location, anchor_stride): - """Builds the computation graph of Region Proposal Network. - - feature_map: backbone features [batch, height, width, depth] - anchors_per_location: number of anchors per pixel in the feature map - anchor_stride: Controls the density of anchors. Typically 1 (anchors for - every pixel in the feature map), or 2 (every other pixel). - - Returns: - rpn_logits: [batch, H, W, 2] Anchor classifier logits (before softmax) - rpn_probs: [batch, H, W, 2] Anchor classifier probabilities. - rpn_bbox: [batch, H, W, (dy, dx, log(dh), log(dw))] Deltas to be - applied to anchors. - """ - # TODO: check if stride of 2 causes alignment issues if the featuremap - # is not even. - # Shared convolutional base of the RPN - shared = KL.Conv2D(512, (3, 3), padding='same', activation='relu', - strides=anchor_stride, - name='rpn_conv_shared')(feature_map) - - # Anchor Score. [batch, height, width, anchors per location * 2]. - x = KL.Conv2D(2 * anchors_per_location, (1, 1), padding='valid', - activation='linear', name='rpn_class_raw')(shared) - - # Reshape to [batch, anchors, 2] - rpn_class_logits = KL.Lambda( - lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 2]))(x) - - # Softmax on last dimension of BG/FG. - rpn_probs = KL.Activation( - "softmax", name="rpn_class_xxx")(rpn_class_logits) - - # Bounding box refinement. [batch, H, W, anchors per location, depth] - # where depth is [x, y, log(w), log(h)] - x = KL.Conv2D(anchors_per_location * 4, (1, 1), padding="valid", - activation='linear', name='rpn_bbox_pred')(shared) - - # Reshape to [batch, anchors, 4] - rpn_bbox = KL.Lambda(lambda t: tf.reshape(t, [tf.shape(t)[0], -1, 4]))(x) - - return [rpn_class_logits, rpn_probs, rpn_bbox] - - -def build_rpn_model(anchor_stride, anchors_per_location, depth): - """Builds a Keras model of the Region Proposal Network. - It wraps the RPN graph so it can be used multiple times with shared - weights. - - anchors_per_location: number of anchors per pixel in the feature map - anchor_stride: Controls the density of anchors. Typically 1 (anchors for - every pixel in the feature map), or 2 (every other pixel). - depth: Depth of the backbone feature map. - - Returns a Keras Model object. The model outputs, when called, are: - rpn_logits: [batch, H, W, 2] Anchor classifier logits (before softmax) - rpn_probs: [batch, W, W, 2] Anchor classifier probabilities. - rpn_bbox: [batch, H, W, (dy, dx, log(dh), log(dw))] Deltas to be - applied to anchors. - """ - input_feature_map = KL.Input(shape=[None, None, depth], - name="input_rpn_feature_map") - outputs = rpn_graph(input_feature_map, anchors_per_location, anchor_stride) - return KM.Model([input_feature_map], outputs, name="rpn_model") - - -############################################################ -# Feature Pyramid Network Heads -############################################################ - -def fpn_classifier_graph(rois, feature_maps, image_meta, - pool_size, num_classes, train_bn=True): - """Builds the computation graph of the feature pyramid network classifier - and regressor heads. - - rois: [batch, num_rois, (y1, x1, y2, x2)] Proposal boxes in normalized - coordinates. - feature_maps: List of feature maps from diffent layers of the pyramid, - [P2, P3, P4, P5]. Each has a different resolution. - - image_meta: [batch, (meta data)] Image details. See compose_image_meta() - pool_size: The width of the square feature map generated from ROI Pooling. - num_classes: number of classes, which determines the depth of the results - train_bn: Boolean. Train or freeze Batch Norm layres - - Returns: - logits: [N, NUM_CLASSES] classifier logits (before softmax) - probs: [N, NUM_CLASSES] classifier probabilities - bbox_deltas: [N, (dy, dx, log(dh), log(dw))] Deltas to apply to - proposal boxes - """ - # ROI Pooling - # Shape: [batch, num_boxes, pool_height, pool_width, channels] - x = PyramidROIAlign([pool_size, pool_size], - name="roi_align_classifier")([rois, image_meta] + feature_maps) - # Two 1024 FC layers (implemented with Conv2D for consistency) - x = KL.TimeDistributed(KL.Conv2D(1024, (pool_size, pool_size), padding="valid"), - name="mrcnn_class_conv1")(x) - x = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn1')(x, training=train_bn) - x = KL.Activation('relu')(x) - x = KL.TimeDistributed(KL.Conv2D(1024, (1, 1)), - name="mrcnn_class_conv2")(x) - x = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn2')(x, training=train_bn) - x = KL.Activation('relu')(x) - - shared = KL.Lambda(lambda x: K.squeeze(K.squeeze(x, 3), 2), - name="pool_squeeze")(x) - - # Classifier head - mrcnn_class_logits = KL.TimeDistributed(KL.Dense(num_classes), - name='mrcnn_class_logits')(shared) - mrcnn_probs = KL.TimeDistributed(KL.Activation("softmax"), - name="mrcnn_class")(mrcnn_class_logits) - - # BBox head - # [batch, boxes, num_classes * (dy, dx, log(dh), log(dw))] - x = KL.TimeDistributed(KL.Dense(num_classes * 4, activation='linear'), - name='mrcnn_bbox_fc')(shared) - # Reshape to [batch, boxes, num_classes, (dy, dx, log(dh), log(dw))] - s = K.int_shape(x) - mrcnn_bbox = KL.Reshape((s[1], num_classes, 4), name="mrcnn_bbox")(x) - - return mrcnn_class_logits, mrcnn_probs, mrcnn_bbox - - -def build_fpn_mask_graph(rois, feature_maps, image_meta, - pool_size, num_classes, train_bn=True): - """Builds the computation graph of the mask head of Feature Pyramid Network. - - rois: [batch, num_rois, (y1, x1, y2, x2)] Proposal boxes in normalized - coordinates. - feature_maps: List of feature maps from diffent layers of the pyramid, - [P2, P3, P4, P5]. Each has a different resolution. - image_meta: [batch, (meta data)] Image details. See compose_image_meta() - pool_size: The width of the square feature map generated from ROI Pooling. - num_classes: number of classes, which determines the depth of the results - train_bn: Boolean. Train or freeze Batch Norm layres - - Returns: Masks [batch, roi_count, height, width, num_classes] - """ - # ROI Pooling - # Shape: [batch, boxes, pool_height, pool_width, channels] - x = PyramidROIAlign([pool_size, pool_size], - name="roi_align_mask")([rois, image_meta] + feature_maps) - - # Conv layers - x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"), - name="mrcnn_mask_conv1")(x) - x = KL.TimeDistributed(BatchNorm(), - name='mrcnn_mask_bn1')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"), - name="mrcnn_mask_conv2")(x) - x = KL.TimeDistributed(BatchNorm(), - name='mrcnn_mask_bn2')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"), - name="mrcnn_mask_conv3")(x) - x = KL.TimeDistributed(BatchNorm(), - name='mrcnn_mask_bn3')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"), - name="mrcnn_mask_conv4")(x) - x = KL.TimeDistributed(BatchNorm(), - name='mrcnn_mask_bn4')(x, training=train_bn) - x = KL.Activation('relu')(x) - - x = KL.TimeDistributed(KL.Conv2DTranspose(256, (2, 2), strides=2, activation="relu"), - name="mrcnn_mask_deconv")(x) - x = KL.TimeDistributed(KL.Conv2D(num_classes, (1, 1), strides=1, activation="sigmoid"), - name="mrcnn_mask")(x) - return x - - -############################################################ -# Loss Functions -############################################################ - -def smooth_l1_loss(y_true, y_pred): - """Implements Smooth-L1 loss. - y_true and y_pred are typicallly: [N, 4], but could be any shape. - """ - diff = K.abs(y_true - y_pred) - less_than_one = K.cast(K.less(diff, 1.0), "float32") - loss = (less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5) - return loss - - -def rpn_class_loss_graph(rpn_match, rpn_class_logits): - """RPN anchor classifier loss. - - rpn_match: [batch, anchors, 1]. Anchor match type. 1=positive, - -1=negative, 0=neutral anchor. - rpn_class_logits: [batch, anchors, 2]. RPN classifier logits for FG/BG. - """ - # Squeeze last dim to simplify - rpn_match = tf.squeeze(rpn_match, -1) - # Get anchor classes. Convert the -1/+1 match to 0/1 values. - anchor_class = K.cast(K.equal(rpn_match, 1), tf.int32) - # Positive and Negative anchors contribute to the loss, - # but neutral anchors (match value = 0) don't. - indices = tf.where(K.not_equal(rpn_match, 0)) - # Pick rows that contribute to the loss and filter out the rest. - rpn_class_logits = tf.gather_nd(rpn_class_logits, indices) - anchor_class = tf.gather_nd(anchor_class, indices) - # Crossentropy loss - loss = K.sparse_categorical_crossentropy(target=anchor_class, - output=rpn_class_logits, - from_logits=True) - loss = K.switch(tf.size(loss) > 0, K.mean(loss), tf.constant(0.0)) - return loss - - -def rpn_bbox_loss_graph(config, target_bbox, rpn_match, rpn_bbox): - """Return the RPN bounding box loss graph. - - config: the model config object. - target_bbox: [batch, max positive anchors, (dy, dx, log(dh), log(dw))]. - Uses 0 padding to fill in unsed bbox deltas. - rpn_match: [batch, anchors, 1]. Anchor match type. 1=positive, - -1=negative, 0=neutral anchor. - rpn_bbox: [batch, anchors, (dy, dx, log(dh), log(dw))] - """ - # Positive anchors contribute to the loss, but negative and - # neutral anchors (match value of 0 or -1) don't. - rpn_match = K.squeeze(rpn_match, -1) - indices = tf.where(K.equal(rpn_match, 1)) - - # Pick bbox deltas that contribute to the loss - rpn_bbox = tf.gather_nd(rpn_bbox, indices) - - # Trim target bounding box deltas to the same length as rpn_bbox. - batch_counts = K.sum(K.cast(K.equal(rpn_match, 1), tf.int32), axis=1) - target_bbox = batch_pack_graph(target_bbox, batch_counts, - config.IMAGES_PER_GPU) - - # TODO: use smooth_l1_loss() rather than reimplementing here - # to reduce code duplication - diff = K.abs(target_bbox - rpn_bbox) - less_than_one = K.cast(K.less(diff, 1.0), "float32") - loss = (less_than_one * 0.5 * diff**2) + (1 - less_than_one) * (diff - 0.5) - - loss = K.switch(tf.size(loss) > 0, K.mean(loss), tf.constant(0.0)) - return loss - - -def mrcnn_class_loss_graph(target_class_ids, pred_class_logits, - active_class_ids): - """Loss for the classifier head of Mask RCNN. - - target_class_ids: [batch, num_rois]. Integer class IDs. Uses zero - padding to fill in the array. - pred_class_logits: [batch, num_rois, num_classes] - active_class_ids: [batch, num_classes]. Has a value of 1 for - classes that are in the dataset of the image, and 0 - for classes that are not in the dataset. - """ - target_class_ids = tf.cast(target_class_ids, 'int64') - - # Find predictions of classes that are not in the dataset. - pred_class_ids = tf.argmax(pred_class_logits, axis=2) - # TODO: Update this line to work with batch > 1. Right now it assumes all - # images in a batch have the same active_class_ids - pred_active = tf.gather(active_class_ids[0], pred_class_ids) - - # Loss - loss = tf.nn.sparse_softmax_cross_entropy_with_logits( - labels=target_class_ids, logits=pred_class_logits) - - # Erase losses of predictions of classes that are not in the active - # classes of the image. - loss = loss * pred_active - - # Computer loss mean. Use only predictions that contribute - # to the loss to get a correct mean. - loss = tf.reduce_sum(loss) / tf.reduce_sum(pred_active) - return loss - - -def mrcnn_bbox_loss_graph(target_bbox, target_class_ids, pred_bbox): - """Loss for Mask R-CNN bounding box refinement. - - target_bbox: [batch, num_rois, (dy, dx, log(dh), log(dw))] - target_class_ids: [batch, num_rois]. Integer class IDs. - pred_bbox: [batch, num_rois, num_classes, (dy, dx, log(dh), log(dw))] - """ - # Reshape to merge batch and roi dimensions for simplicity. - target_class_ids = K.reshape(target_class_ids, (-1,)) - target_bbox = K.reshape(target_bbox, (-1, 4)) - pred_bbox = K.reshape(pred_bbox, (-1, K.int_shape(pred_bbox)[2], 4)) - - # Only positive ROIs contribute to the loss. And only - # the right class_id of each ROI. Get their indicies. - positive_roi_ix = tf.where(target_class_ids > 0)[:, 0] - positive_roi_class_ids = tf.cast( - tf.gather(target_class_ids, positive_roi_ix), tf.int64) - indices = tf.stack([positive_roi_ix, positive_roi_class_ids], axis=1) - - # Gather the deltas (predicted and true) that contribute to loss - target_bbox = tf.gather(target_bbox, positive_roi_ix) - pred_bbox = tf.gather_nd(pred_bbox, indices) - - # Smooth-L1 Loss - loss = K.switch(tf.size(target_bbox) > 0, - smooth_l1_loss(y_true=target_bbox, y_pred=pred_bbox), - tf.constant(0.0)) - loss = K.mean(loss) - return loss - - -def mrcnn_mask_loss_graph(target_masks, target_class_ids, pred_masks): - """Mask binary cross-entropy loss for the masks head. - - target_masks: [batch, num_rois, height, width]. - A float32 tensor of values 0 or 1. Uses zero padding to fill array. - target_class_ids: [batch, num_rois]. Integer class IDs. Zero padded. - pred_masks: [batch, proposals, height, width, num_classes] float32 tensor - with values from 0 to 1. - """ - # Reshape for simplicity. Merge first two dimensions into one. - target_class_ids = K.reshape(target_class_ids, (-1,)) - mask_shape = tf.shape(target_masks) - target_masks = K.reshape(target_masks, (-1, mask_shape[2], mask_shape[3])) - pred_shape = tf.shape(pred_masks) - pred_masks = K.reshape(pred_masks, - (-1, pred_shape[2], pred_shape[3], pred_shape[4])) - # Permute predicted masks to [N, num_classes, height, width] - pred_masks = tf.transpose(pred_masks, [0, 3, 1, 2]) - - # Only positive ROIs contribute to the loss. And only - # the class specific mask of each ROI. - positive_ix = tf.where(target_class_ids > 0)[:, 0] - positive_class_ids = tf.cast( - tf.gather(target_class_ids, positive_ix), tf.int64) - indices = tf.stack([positive_ix, positive_class_ids], axis=1) - - # Gather the masks (predicted and true) that contribute to loss - y_true = tf.gather(target_masks, positive_ix) - y_pred = tf.gather_nd(pred_masks, indices) - - # Compute binary cross entropy. If no positive ROIs, then return 0. - # shape: [batch, roi, num_classes] - loss = K.switch(tf.size(y_true) > 0, - K.binary_crossentropy(target=y_true, output=y_pred), - tf.constant(0.0)) - loss = K.mean(loss) - return loss - - -############################################################ -# Data Generator -############################################################ - -def load_image_gt(dataset, config, image_id, augment=False, augmentation=None, - use_mini_mask=False): - """Load and return ground truth data for an image (image, mask, bounding boxes). - - augment: (Depricated. Use augmentation instead). If true, apply random - image augmentation. Currently, only horizontal flipping is offered. - augmentation: Optional. An imgaug (https://github.com/aleju/imgaug) augmentation. - For example, passing imgaug.augmenters.Fliplr(0.5) flips images - right/left 50% of the time. - use_mini_mask: If False, returns full-size masks that are the same height - and width as the original image. These can be big, for example - 1024x1024x100 (for 100 instances). Mini masks are smaller, typically, - 224x224 and are generated by extracting the bounding box of the - object and resizing it to MINI_MASK_SHAPE. - - Returns: - image: [height, width, 3] - shape: the original shape of the image before resizing and cropping. - class_ids: [instance_count] Integer class IDs - bbox: [instance_count, (y1, x1, y2, x2)] - mask: [height, width, instance_count]. The height and width are those - of the image unless use_mini_mask is True, in which case they are - defined in MINI_MASK_SHAPE. - """ - # Load image and mask - image = dataset.load_image(image_id) - mask, class_ids = dataset.load_mask(image_id) - original_shape = image.shape - image, window, scale, padding = utils.resize_image( - image, - min_dim=config.IMAGE_MIN_DIM, - max_dim=config.IMAGE_MAX_DIM, - mode=config.IMAGE_RESIZE_MODE) - mask = utils.resize_mask(mask, scale, padding) - - # Random horizontal flips. - # TODO: will be removed in a future update in favor of augmentation - if augment: - logging.warning("'augment' is depricated. Use 'augmentation' instead.") - if random.randint(0, 1): - image = np.fliplr(image) - mask = np.fliplr(mask) - - # Augmentation - # This requires the imgaug lib (https://github.com/aleju/imgaug) - if augmentation: - import imgaug - - # Augmentors that are safe to apply to masks - # Some, such as Affine, have settings that make them unsafe, so always - # test your augmentation on masks - MASK_AUGMENTERS = ["Sequential", "SomeOf", "OneOf", "Sometimes", - "Fliplr", "Flipud", "CropAndPad", - "Affine", "PiecewiseAffine"] - - def hook(images, augmenter, parents, default): - """Determines which augmenters to apply to masks.""" - return (augmenter.__class__.__name__ in MASK_AUGMENTERS) - - # Store shapes before augmentation to compare - image_shape = image.shape - mask_shape = mask.shape - # Make augmenters deterministic to apply similarly to images and masks - det = augmentation.to_deterministic() - image = det.augment_image(image) - # Change mask to np.uint8 because imgaug doesn't support np.bool - mask = det.augment_image(mask.astype(np.uint8), - hooks=imgaug.HooksImages(activator=hook)) - # Verify that shapes didn't change - assert image.shape == image_shape, "Augmentation shouldn't change image size" - assert mask.shape == mask_shape, "Augmentation shouldn't change mask size" - # Change mask back to bool - mask = mask.astype(np.bool) - - # Note that some boxes might be all zeros if the corresponding mask got cropped out. - # and here is to filter them out - _idx = np.sum(mask, axis=(0, 1)) > 0 - mask = mask[:, :, _idx] - class_ids = class_ids[_idx] - # Bounding boxes. Note that some boxes might be all zeros - # if the corresponding mask got cropped out. - # bbox: [num_instances, (y1, x1, y2, x2)] - bbox = utils.extract_bboxes(mask) - - # Active classes - # Different datasets have different classes, so track the - # classes supported in the dataset of this image. - active_class_ids = np.zeros([dataset.num_classes], dtype=np.int32) - source_class_ids = dataset.source_class_ids[dataset.image_info[image_id]["source"]] - active_class_ids[source_class_ids] = 1 - - # Resize masks to smaller size to reduce memory usage - if use_mini_mask: - mask = utils.minimize_mask(bbox, mask, config.MINI_MASK_SHAPE) - - # Image meta data - image_meta = compose_image_meta(image_id, original_shape, image.shape, - window, scale, active_class_ids) - - return image, image_meta, class_ids, bbox, mask - - -def build_detection_targets(rpn_rois, gt_class_ids, gt_boxes, gt_masks, config): - """Generate targets for training Stage 2 classifier and mask heads. - This is not used in normal training. It's useful for debugging or to train - the Mask RCNN heads without using the RPN head. - - Inputs: - rpn_rois: [N, (y1, x1, y2, x2)] proposal boxes. - gt_class_ids: [instance count] Integer class IDs - gt_boxes: [instance count, (y1, x1, y2, x2)] - gt_masks: [height, width, instance count] Grund truth masks. Can be full - size or mini-masks. - - Returns: - rois: [TRAIN_ROIS_PER_IMAGE, (y1, x1, y2, x2)] - class_ids: [TRAIN_ROIS_PER_IMAGE]. Integer class IDs. - bboxes: [TRAIN_ROIS_PER_IMAGE, NUM_CLASSES, (y, x, log(h), log(w))]. Class-specific - bbox refinements. - masks: [TRAIN_ROIS_PER_IMAGE, height, width, NUM_CLASSES). Class specific masks cropped - to bbox boundaries and resized to neural network output size. - """ - assert rpn_rois.shape[0] > 0 - assert gt_class_ids.dtype == np.int32, "Expected int but got {}".format( - gt_class_ids.dtype) - assert gt_boxes.dtype == np.int32, "Expected int but got {}".format( - gt_boxes.dtype) - assert gt_masks.dtype == np.bool_, "Expected bool but got {}".format( - gt_masks.dtype) - - # It's common to add GT Boxes to ROIs but we don't do that here because - # according to XinLei Chen's paper, it doesn't help. - - # Trim empty padding in gt_boxes and gt_masks parts - instance_ids = np.where(gt_class_ids > 0)[0] - assert instance_ids.shape[0] > 0, "Image must contain instances." - gt_class_ids = gt_class_ids[instance_ids] - gt_boxes = gt_boxes[instance_ids] - gt_masks = gt_masks[:, :, instance_ids] - - # Compute areas of ROIs and ground truth boxes. - rpn_roi_area = (rpn_rois[:, 2] - rpn_rois[:, 0]) * \ - (rpn_rois[:, 3] - rpn_rois[:, 1]) - gt_box_area = (gt_boxes[:, 2] - gt_boxes[:, 0]) * \ - (gt_boxes[:, 3] - gt_boxes[:, 1]) - - # Compute overlaps [rpn_rois, gt_boxes] - overlaps = np.zeros((rpn_rois.shape[0], gt_boxes.shape[0])) - for i in range(overlaps.shape[1]): - gt = gt_boxes[i] - overlaps[:, i] = utils.compute_iou( - gt, rpn_rois, gt_box_area[i], rpn_roi_area) - - # Assign ROIs to GT boxes - rpn_roi_iou_argmax = np.argmax(overlaps, axis=1) - rpn_roi_iou_max = overlaps[np.arange( - overlaps.shape[0]), rpn_roi_iou_argmax] - # GT box assigned to each ROI - rpn_roi_gt_boxes = gt_boxes[rpn_roi_iou_argmax] - rpn_roi_gt_class_ids = gt_class_ids[rpn_roi_iou_argmax] - - # Positive ROIs are those with >= 0.5 IoU with a GT box. - fg_ids = np.where(rpn_roi_iou_max > 0.5)[0] - - # Negative ROIs are those with max IoU 0.1-0.5 (hard example mining) - # TODO: To hard example mine or not to hard example mine, that's the question -# bg_ids = np.where((rpn_roi_iou_max >= 0.1) & (rpn_roi_iou_max < 0.5))[0] - bg_ids = np.where(rpn_roi_iou_max < 0.5)[0] - - # Subsample ROIs. Aim for 33% foreground. - # FG - fg_roi_count = int(config.TRAIN_ROIS_PER_IMAGE * config.ROI_POSITIVE_RATIO) - if fg_ids.shape[0] > fg_roi_count: - keep_fg_ids = np.random.choice(fg_ids, fg_roi_count, replace=False) - else: - keep_fg_ids = fg_ids - # BG - remaining = config.TRAIN_ROIS_PER_IMAGE - keep_fg_ids.shape[0] - if bg_ids.shape[0] > remaining: - keep_bg_ids = np.random.choice(bg_ids, remaining, replace=False) - else: - keep_bg_ids = bg_ids - # Combine indicies of ROIs to keep - keep = np.concatenate([keep_fg_ids, keep_bg_ids]) - # Need more? - remaining = config.TRAIN_ROIS_PER_IMAGE - keep.shape[0] - if remaining > 0: - # Looks like we don't have enough samples to maintain the desired - # balance. Reduce requirements and fill in the rest. This is - # likely different from the Mask RCNN paper. - - # There is a small chance we have neither fg nor bg samples. - if keep.shape[0] == 0: - # Pick bg regions with easier IoU threshold - bg_ids = np.where(rpn_roi_iou_max < 0.5)[0] - assert bg_ids.shape[0] >= remaining - keep_bg_ids = np.random.choice(bg_ids, remaining, replace=False) - assert keep_bg_ids.shape[0] == remaining - keep = np.concatenate([keep, keep_bg_ids]) - else: - # Fill the rest with repeated bg rois. - keep_extra_ids = np.random.choice( - keep_bg_ids, remaining, replace=True) - keep = np.concatenate([keep, keep_extra_ids]) - assert keep.shape[0] == config.TRAIN_ROIS_PER_IMAGE, \ - "keep doesn't match ROI batch size {}, {}".format( - keep.shape[0], config.TRAIN_ROIS_PER_IMAGE) - - # Reset the gt boxes assigned to BG ROIs. - rpn_roi_gt_boxes[keep_bg_ids, :] = 0 - rpn_roi_gt_class_ids[keep_bg_ids] = 0 - - # For each kept ROI, assign a class_id, and for FG ROIs also add bbox refinement. - rois = rpn_rois[keep] - roi_gt_boxes = rpn_roi_gt_boxes[keep] - roi_gt_class_ids = rpn_roi_gt_class_ids[keep] - roi_gt_assignment = rpn_roi_iou_argmax[keep] - - # Class-aware bbox deltas. [y, x, log(h), log(w)] - bboxes = np.zeros((config.TRAIN_ROIS_PER_IMAGE, - config.NUM_CLASSES, 4), dtype=np.float32) - pos_ids = np.where(roi_gt_class_ids > 0)[0] - bboxes[pos_ids, roi_gt_class_ids[pos_ids]] = utils.box_refinement( - rois[pos_ids], roi_gt_boxes[pos_ids, :4]) - # Normalize bbox refinements - bboxes /= config.BBOX_STD_DEV - - # Generate class-specific target masks - masks = np.zeros((config.TRAIN_ROIS_PER_IMAGE, config.MASK_SHAPE[0], config.MASK_SHAPE[1], config.NUM_CLASSES), - dtype=np.float32) - for i in pos_ids: - class_id = roi_gt_class_ids[i] - assert class_id > 0, "class id must be greater than 0" - gt_id = roi_gt_assignment[i] - class_mask = gt_masks[:, :, gt_id] - - if config.USE_MINI_MASK: - # Create a mask placeholder, the size of the image - placeholder = np.zeros(config.IMAGE_SHAPE[:2], dtype=bool) - # GT box - gt_y1, gt_x1, gt_y2, gt_x2 = gt_boxes[gt_id] - gt_w = gt_x2 - gt_x1 - gt_h = gt_y2 - gt_y1 - # Resize mini mask to size of GT box - placeholder[gt_y1:gt_y2, gt_x1:gt_x2] = \ - np.round(skimage.transform.resize( - class_mask, (gt_h, gt_w), order=1, mode="constant")).astype(bool) - # Place the mini batch in the placeholder - class_mask = placeholder - - # Pick part of the mask and resize it - y1, x1, y2, x2 = rois[i].astype(np.int32) - m = class_mask[y1:y2, x1:x2] - mask = skimage.transform.resize(m, config.MASK_SHAPE, order=1, mode="constant") - masks[i, :, :, class_id] = mask - - return rois, roi_gt_class_ids, bboxes, masks - - -def build_rpn_targets(image_shape, anchors, gt_class_ids, gt_boxes, config): - """Given the anchors and GT boxes, compute overlaps and identify positive - anchors and deltas to refine them to match their corresponding GT boxes. - - anchors: [num_anchors, (y1, x1, y2, x2)] - gt_class_ids: [num_gt_boxes] Integer class IDs. - gt_boxes: [num_gt_boxes, (y1, x1, y2, x2)] - - Returns: - rpn_match: [N] (int32) matches between anchors and GT boxes. - 1 = positive anchor, -1 = negative anchor, 0 = neutral - rpn_bbox: [N, (dy, dx, log(dh), log(dw))] Anchor bbox deltas. - """ - # RPN Match: 1 = positive anchor, -1 = negative anchor, 0 = neutral - rpn_match = np.zeros([anchors.shape[0]], dtype=np.int32) - # RPN bounding boxes: [max anchors per image, (dy, dx, log(dh), log(dw))] - rpn_bbox = np.zeros((config.RPN_TRAIN_ANCHORS_PER_IMAGE, 4)) - - # Handle COCO crowds - # A crowd box in COCO is a bounding box around several instances. Exclude - # them from training. A crowd box is given a negative class ID. - crowd_ix = np.where(gt_class_ids < 0)[0] - if crowd_ix.shape[0] > 0: - # Filter out crowds from ground truth class IDs and boxes - non_crowd_ix = np.where(gt_class_ids > 0)[0] - crowd_boxes = gt_boxes[crowd_ix] - gt_class_ids = gt_class_ids[non_crowd_ix] - gt_boxes = gt_boxes[non_crowd_ix] - # Compute overlaps with crowd boxes [anchors, crowds] - crowd_overlaps = utils.compute_overlaps(anchors, crowd_boxes) - crowd_iou_max = np.amax(crowd_overlaps, axis=1) - no_crowd_bool = (crowd_iou_max < 0.001) - else: - # All anchors don't intersect a crowd - no_crowd_bool = np.ones([anchors.shape[0]], dtype=bool) - - # Compute overlaps [num_anchors, num_gt_boxes] - overlaps = utils.compute_overlaps(anchors, gt_boxes) - - # Match anchors to GT Boxes - # If an anchor overlaps a GT box with IoU >= 0.7 then it's positive. - # If an anchor overlaps a GT box with IoU < 0.3 then it's negative. - # Neutral anchors are those that don't match the conditions above, - # and they don't influence the loss function. - # However, don't keep any GT box unmatched (rare, but happens). Instead, - # match it to the closest anchor (even if its max IoU is < 0.3). - # - # 1. Set negative anchors first. They get overwritten below if a GT box is - # matched to them. Skip boxes in crowd areas. - anchor_iou_argmax = np.argmax(overlaps, axis=1) - anchor_iou_max = overlaps[np.arange(overlaps.shape[0]), anchor_iou_argmax] - rpn_match[(anchor_iou_max < 0.3) & (no_crowd_bool)] = -1 - # 2. Set an anchor for each GT box (regardless of IoU value). - # TODO: If multiple anchors have the same IoU match all of them - gt_iou_argmax = np.argmax(overlaps, axis=0) - rpn_match[gt_iou_argmax] = 1 - # 3. Set anchors with high overlap as positive. - rpn_match[anchor_iou_max >= 0.7] = 1 - - # Subsample to balance positive and negative anchors - # Don't let positives be more than half the anchors - ids = np.where(rpn_match == 1)[0] - extra = len(ids) - (config.RPN_TRAIN_ANCHORS_PER_IMAGE // 2) - if extra > 0: - # Reset the extra ones to neutral - ids = np.random.choice(ids, extra, replace=False) - rpn_match[ids] = 0 - # Same for negative proposals - ids = np.where(rpn_match == -1)[0] - extra = len(ids) - (config.RPN_TRAIN_ANCHORS_PER_IMAGE - - np.sum(rpn_match == 1)) - if extra > 0: - # Rest the extra ones to neutral - ids = np.random.choice(ids, extra, replace=False) - rpn_match[ids] = 0 - - # For positive anchors, compute shift and scale needed to transform them - # to match the corresponding GT boxes. - ids = np.where(rpn_match == 1)[0] - ix = 0 # index into rpn_bbox - # TODO: use box_refinement() rather than duplicating the code here - for i, a in zip(ids, anchors[ids]): - # Closest gt box (it might have IoU < 0.7) - gt = gt_boxes[anchor_iou_argmax[i]] - - # Convert coordinates to center plus width/height. - # GT Box - gt_h = gt[2] - gt[0] - gt_w = gt[3] - gt[1] - gt_center_y = gt[0] + 0.5 * gt_h - gt_center_x = gt[1] + 0.5 * gt_w - # Anchor - a_h = a[2] - a[0] - a_w = a[3] - a[1] - a_center_y = a[0] + 0.5 * a_h - a_center_x = a[1] + 0.5 * a_w - - # Compute the bbox refinement that the RPN should predict. - rpn_bbox[ix] = [ - (gt_center_y - a_center_y) / a_h, - (gt_center_x - a_center_x) / a_w, - np.log(gt_h / a_h), - np.log(gt_w / a_w), - ] - # Normalize - rpn_bbox[ix] /= config.RPN_BBOX_STD_DEV - ix += 1 - - return rpn_match, rpn_bbox - - -def generate_random_rois(image_shape, count, gt_class_ids, gt_boxes): - """Generates ROI proposals similar to what a region proposal network - would generate. - - image_shape: [Height, Width, Depth] - count: Number of ROIs to generate - gt_class_ids: [N] Integer ground truth class IDs - gt_boxes: [N, (y1, x1, y2, x2)] Ground truth boxes in pixels. - - Returns: [count, (y1, x1, y2, x2)] ROI boxes in pixels. - """ - # placeholder - rois = np.zeros((count, 4), dtype=np.int32) - - # Generate random ROIs around GT boxes (90% of count) - rois_per_box = int(0.9 * count / gt_boxes.shape[0]) - for i in range(gt_boxes.shape[0]): - gt_y1, gt_x1, gt_y2, gt_x2 = gt_boxes[i] - h = gt_y2 - gt_y1 - w = gt_x2 - gt_x1 - # random boundaries - r_y1 = max(gt_y1 - h, 0) - r_y2 = min(gt_y2 + h, image_shape[0]) - r_x1 = max(gt_x1 - w, 0) - r_x2 = min(gt_x2 + w, image_shape[1]) - - # To avoid generating boxes with zero area, we generate double what - # we need and filter out the extra. If we get fewer valid boxes - # than we need, we loop and try again. - while True: - y1y2 = np.random.randint(r_y1, r_y2, (rois_per_box * 2, 2)) - x1x2 = np.random.randint(r_x1, r_x2, (rois_per_box * 2, 2)) - # Filter out zero area boxes - threshold = 1 - y1y2 = y1y2[np.abs(y1y2[:, 0] - y1y2[:, 1]) >= - threshold][:rois_per_box] - x1x2 = x1x2[np.abs(x1x2[:, 0] - x1x2[:, 1]) >= - threshold][:rois_per_box] - if y1y2.shape[0] == rois_per_box and x1x2.shape[0] == rois_per_box: - break - - # Sort on axis 1 to ensure x1 <= x2 and y1 <= y2 and then reshape - # into x1, y1, x2, y2 order - x1, x2 = np.split(np.sort(x1x2, axis=1), 2, axis=1) - y1, y2 = np.split(np.sort(y1y2, axis=1), 2, axis=1) - box_rois = np.hstack([y1, x1, y2, x2]) - rois[rois_per_box * i:rois_per_box * (i + 1)] = box_rois - - # Generate random ROIs anywhere in the image (10% of count) - remaining_count = count - (rois_per_box * gt_boxes.shape[0]) - # To avoid generating boxes with zero area, we generate double what - # we need and filter out the extra. If we get fewer valid boxes - # than we need, we loop and try again. - while True: - y1y2 = np.random.randint(0, image_shape[0], (remaining_count * 2, 2)) - x1x2 = np.random.randint(0, image_shape[1], (remaining_count * 2, 2)) - # Filter out zero area boxes - threshold = 1 - y1y2 = y1y2[np.abs(y1y2[:, 0] - y1y2[:, 1]) >= - threshold][:remaining_count] - x1x2 = x1x2[np.abs(x1x2[:, 0] - x1x2[:, 1]) >= - threshold][:remaining_count] - if y1y2.shape[0] == remaining_count and x1x2.shape[0] == remaining_count: - break - - # Sort on axis 1 to ensure x1 <= x2 and y1 <= y2 and then reshape - # into x1, y1, x2, y2 order - x1, x2 = np.split(np.sort(x1x2, axis=1), 2, axis=1) - y1, y2 = np.split(np.sort(y1y2, axis=1), 2, axis=1) - global_rois = np.hstack([y1, x1, y2, x2]) - rois[-remaining_count:] = global_rois - return rois - - -def data_generator(dataset, config, shuffle=True, augment=False, augmentation=None, - random_rois=0, batch_size=1, detection_targets=False): - """A generator that returns images and corresponding target class ids, - bounding box deltas, and masks. - - dataset: The Dataset object to pick data from - config: The model config object - shuffle: If True, shuffles the samples before every epoch - augment: (Depricated. Use augmentation instead). If true, apply random - image augmentation. Currently, only horizontal flipping is offered. - augmentation: Optional. An imgaug (https://github.com/aleju/imgaug) augmentation. - For example, passing imgaug.augmenters.Fliplr(0.5) flips images - right/left 50% of the time. - random_rois: If > 0 then generate proposals to be used to train the - network classifier and mask heads. Useful if training - the Mask RCNN part without the RPN. - batch_size: How many images to return in each call - detection_targets: If True, generate detection targets (class IDs, bbox - deltas, and masks). Typically for debugging or visualizations because - in trainig detection targets are generated by DetectionTargetLayer. - - Returns a Python generator. Upon calling next() on it, the - generator returns two lists, inputs and outputs. The containtes - of the lists differs depending on the received arguments: - inputs list: - - images: [batch, H, W, C] - - image_meta: [batch, (meta data)] Image details. See compose_image_meta() - - rpn_match: [batch, N] Integer (1=positive anchor, -1=negative, 0=neutral) - - rpn_bbox: [batch, N, (dy, dx, log(dh), log(dw))] Anchor bbox deltas. - - gt_class_ids: [batch, MAX_GT_INSTANCES] Integer class IDs - - gt_boxes: [batch, MAX_GT_INSTANCES, (y1, x1, y2, x2)] - - gt_masks: [batch, height, width, MAX_GT_INSTANCES]. The height and width - are those of the image unless use_mini_mask is True, in which - case they are defined in MINI_MASK_SHAPE. - - outputs list: Usually empty in regular training. But if detection_targets - is True then the outputs list contains target class_ids, bbox deltas, - and masks. - """ - b = 0 # batch item index - image_index = -1 - image_ids = np.copy(dataset.image_ids) - error_count = 0 - - # Anchors - # [anchor_count, (y1, x1, y2, x2)] - backbone_shapes = compute_backbone_shapes(config, config.IMAGE_SHAPE) - anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES, - config.RPN_ANCHOR_RATIOS, - backbone_shapes, - config.BACKBONE_STRIDES, - config.RPN_ANCHOR_STRIDE) - - # Keras requires a generator to run indefinately. - while True: - try: - # Increment index to pick next image. Shuffle if at the start of an epoch. - image_index = (image_index + 1) % len(image_ids) - if shuffle and image_index == 0: - np.random.shuffle(image_ids) - - # Get GT bounding boxes and masks for image. - image_id = image_ids[image_index] - image, image_meta, gt_class_ids, gt_boxes, gt_masks = \ - load_image_gt(dataset, config, image_id, augment=augment, - augmentation=augmentation, - use_mini_mask=config.USE_MINI_MASK) - - # Skip images that have no instances. This can happen in cases - # where we train on a subset of classes and the image doesn't - # have any of the classes we care about. - if not np.any(gt_class_ids > 0): - continue - - # RPN Targets - rpn_match, rpn_bbox = build_rpn_targets(image.shape, anchors, - gt_class_ids, gt_boxes, config) - - # Mask R-CNN Targets - if random_rois: - rpn_rois = generate_random_rois( - image.shape, random_rois, gt_class_ids, gt_boxes) - if detection_targets: - rois, mrcnn_class_ids, mrcnn_bbox, mrcnn_mask =\ - build_detection_targets( - rpn_rois, gt_class_ids, gt_boxes, gt_masks, config) - - # Init batch arrays - if b == 0: - batch_image_meta = np.zeros( - (batch_size,) + image_meta.shape, dtype=image_meta.dtype) - batch_rpn_match = np.zeros( - [batch_size, anchors.shape[0], 1], dtype=rpn_match.dtype) - batch_rpn_bbox = np.zeros( - [batch_size, config.RPN_TRAIN_ANCHORS_PER_IMAGE, 4], dtype=rpn_bbox.dtype) - batch_images = np.zeros( - (batch_size,) + image.shape, dtype=np.float32) - batch_gt_class_ids = np.zeros( - (batch_size, config.MAX_GT_INSTANCES), dtype=np.int32) - batch_gt_boxes = np.zeros( - (batch_size, config.MAX_GT_INSTANCES, 4), dtype=np.int32) - batch_gt_masks = np.zeros( - (batch_size, gt_masks.shape[1], gt_masks.shape[1], - config.MAX_GT_INSTANCES), dtype=gt_masks.dtype) - if random_rois: - batch_rpn_rois = np.zeros( - (batch_size, rpn_rois.shape[0], 4), dtype=rpn_rois.dtype) - if detection_targets: - batch_rois = np.zeros( - (batch_size,) + rois.shape, dtype=rois.dtype) - batch_mrcnn_class_ids = np.zeros( - (batch_size,) + mrcnn_class_ids.shape, dtype=mrcnn_class_ids.dtype) - batch_mrcnn_bbox = np.zeros( - (batch_size,) + mrcnn_bbox.shape, dtype=mrcnn_bbox.dtype) - batch_mrcnn_mask = np.zeros( - (batch_size,) + mrcnn_mask.shape, dtype=mrcnn_mask.dtype) - - # If more instances than fits in the array, sub-sample from them. - if gt_boxes.shape[0] > config.MAX_GT_INSTANCES: - ids = np.random.choice( - np.arange(gt_boxes.shape[0]), config.MAX_GT_INSTANCES, replace=False) - gt_class_ids = gt_class_ids[ids] - gt_boxes = gt_boxes[ids] - gt_masks = gt_masks[:, :, ids] - - # Add to batch - batch_image_meta[b] = image_meta - batch_rpn_match[b] = rpn_match[:, np.newaxis] - batch_rpn_bbox[b] = rpn_bbox - batch_images[b] = mold_image(image.astype(np.float32), config) - batch_gt_class_ids[b, :gt_class_ids.shape[0]] = gt_class_ids - batch_gt_boxes[b, :gt_boxes.shape[0]] = gt_boxes - batch_gt_masks[b, :, :, :gt_masks.shape[-1]] = gt_masks - if random_rois: - batch_rpn_rois[b] = rpn_rois - if detection_targets: - batch_rois[b] = rois - batch_mrcnn_class_ids[b] = mrcnn_class_ids - batch_mrcnn_bbox[b] = mrcnn_bbox - batch_mrcnn_mask[b] = mrcnn_mask - b += 1 - - # Batch full? - if b >= batch_size: - inputs = [batch_images, batch_image_meta, batch_rpn_match, batch_rpn_bbox, - batch_gt_class_ids, batch_gt_boxes, batch_gt_masks] - outputs = [] - - if random_rois: - inputs.extend([batch_rpn_rois]) - if detection_targets: - inputs.extend([batch_rois]) - # Keras requires that output and targets have the same number of dimensions - batch_mrcnn_class_ids = np.expand_dims( - batch_mrcnn_class_ids, -1) - outputs.extend( - [batch_mrcnn_class_ids, batch_mrcnn_bbox, batch_mrcnn_mask]) - - yield inputs, outputs - - # start a new batch - b = 0 - except (GeneratorExit, KeyboardInterrupt): - raise - except: - # Log it and skip the image - logging.exception("Error processing image {}".format( - dataset.image_info[image_id])) - error_count += 1 - if error_count > 5: - raise - - -############################################################ -# MaskRCNN Class -############################################################ - -class MaskRCNN(): - """Encapsulates the Mask RCNN model functionality. - - The actual Keras model is in the keras_model property. - """ - - def __init__(self, mode, config, model_dir): - """ - mode: Either "training" or "inference" - config: A Sub-class of the Config class - model_dir: Directory to save training logs and trained weights - """ - assert mode in ['training', 'inference'] - self.mode = mode - self.config = config - self.model_dir = model_dir - self.set_log_dir() - self.keras_model = self.build(mode=mode, config=config) - - def build(self, mode, config): - """Build Mask R-CNN architecture. - input_shape: The shape of the input image. - mode: Either "training" or "inference". The inputs and - outputs of the model differ accordingly. - """ - assert mode in ['training', 'inference'] - - # Image size must be dividable by 2 multiple times - h, w = config.IMAGE_SHAPE[:2] - if h / 2**6 != int(h / 2**6) or w / 2**6 != int(w / 2**6): - raise Exception("Image size must be dividable by 2 at least 6 times " - "to avoid fractions when downscaling and upscaling." - "For example, use 256, 320, 384, 448, 512, ... etc. ") - - # Inputs - input_image = KL.Input( - shape=[None, None, 3], name="input_image") - input_image_meta = KL.Input(shape=[config.IMAGE_META_SIZE], - name="input_image_meta") - if mode == "training": - # RPN GT - input_rpn_match = KL.Input( - shape=[None, 1], name="input_rpn_match", dtype=tf.int32) - input_rpn_bbox = KL.Input( - shape=[None, 4], name="input_rpn_bbox", dtype=tf.float32) - - # Detection GT (class IDs, bounding boxes, and masks) - # 1. GT Class IDs (zero padded) - input_gt_class_ids = KL.Input( - shape=[None], name="input_gt_class_ids", dtype=tf.int32) - # 2. GT Boxes in pixels (zero padded) - # [batch, MAX_GT_INSTANCES, (y1, x1, y2, x2)] in image coordinates - input_gt_boxes = KL.Input( - shape=[None, 4], name="input_gt_boxes", dtype=tf.float32) - # Normalize coordinates - gt_boxes = KL.Lambda(lambda x: norm_boxes_graph( - x, K.shape(input_image)[1:3]))(input_gt_boxes) - # 3. GT Masks (zero padded) - # [batch, height, width, MAX_GT_INSTANCES] - if config.USE_MINI_MASK: - input_gt_masks = KL.Input( - shape=[config.MINI_MASK_SHAPE[0], - config.MINI_MASK_SHAPE[1], None], - name="input_gt_masks", dtype=bool) - else: - input_gt_masks = KL.Input( - shape=[config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1], None], - name="input_gt_masks", dtype=bool) - elif mode == "inference": - # Anchors in normalized coordinates - input_anchors = KL.Input(shape=[None, 4], name="input_anchors") - - # Build the shared convolutional layers. - # Bottom-up Layers - # Returns a list of the last layers of each stage, 5 in total. - # Don't create the thead (stage 5), so we pick the 4th item in the list. - _, C2, C3, C4, C5 = resnet_graph(input_image, config.BACKBONE, - stage5=True, train_bn=config.TRAIN_BN) - # Top-down Layers - # TODO: add assert to varify feature map sizes match what's in config - P5 = KL.Conv2D(256, (1, 1), name='fpn_c5p5')(C5) - P4 = KL.Add(name="fpn_p4add")([ - KL.UpSampling2D(size=(2, 2), name="fpn_p5upsampled")(P5), - KL.Conv2D(256, (1, 1), name='fpn_c4p4')(C4)]) - P3 = KL.Add(name="fpn_p3add")([ - KL.UpSampling2D(size=(2, 2), name="fpn_p4upsampled")(P4), - KL.Conv2D(256, (1, 1), name='fpn_c3p3')(C3)]) - P2 = KL.Add(name="fpn_p2add")([ - KL.UpSampling2D(size=(2, 2), name="fpn_p3upsampled")(P3), - KL.Conv2D(256, (1, 1), name='fpn_c2p2')(C2)]) - # Attach 3x3 conv to all P layers to get the final feature maps. - P2 = KL.Conv2D(256, (3, 3), padding="SAME", name="fpn_p2")(P2) - P3 = KL.Conv2D(256, (3, 3), padding="SAME", name="fpn_p3")(P3) - P4 = KL.Conv2D(256, (3, 3), padding="SAME", name="fpn_p4")(P4) - P5 = KL.Conv2D(256, (3, 3), padding="SAME", name="fpn_p5")(P5) - # P6 is used for the 5th anchor scale in RPN. Generated by - # subsampling from P5 with stride of 2. - P6 = KL.MaxPooling2D(pool_size=(1, 1), strides=2, name="fpn_p6")(P5) - - # Note that P6 is used in RPN, but not in the classifier heads. - rpn_feature_maps = [P2, P3, P4, P5, P6] - mrcnn_feature_maps = [P2, P3, P4, P5] - - # Anchors - if mode == "training": - anchors = self.get_anchors(config.IMAGE_SHAPE) - # Duplicate across the batch dimension because Keras requires it - # TODO: can this be optimized to avoid duplicating the anchors? - anchors = np.broadcast_to(anchors, (config.BATCH_SIZE,) + anchors.shape) - # A hack to get around Keras's bad support for constants - anchors = KL.Lambda(lambda x: tf.constant(anchors), name="anchors")(input_image) - else: - anchors = input_anchors - - # RPN Model - rpn = build_rpn_model(config.RPN_ANCHOR_STRIDE, - len(config.RPN_ANCHOR_RATIOS), 256) - # Loop through pyramid layers - layer_outputs = [] # list of lists - for p in rpn_feature_maps: - layer_outputs.append(rpn([p])) - # Concatenate layer outputs - # Convert from list of lists of level outputs to list of lists - # of outputs across levels. - # e.g. [[a1, b1, c1], [a2, b2, c2]] => [[a1, a2], [b1, b2], [c1, c2]] - output_names = ["rpn_class_logits", "rpn_class", "rpn_bbox"] - outputs = list(zip(*layer_outputs)) - outputs = [KL.Concatenate(axis=1, name=n)(list(o)) - for o, n in zip(outputs, output_names)] - - rpn_class_logits, rpn_class, rpn_bbox = outputs - - # Generate proposals - # Proposals are [batch, N, (y1, x1, y2, x2)] in normalized coordinates - # and zero padded. - proposal_count = config.POST_NMS_ROIS_TRAINING if mode == "training"\ - else config.POST_NMS_ROIS_INFERENCE - rpn_rois = ProposalLayer( - proposal_count=proposal_count, - nms_threshold=config.RPN_NMS_THRESHOLD, - name="ROI", - config=config)([rpn_class, rpn_bbox, anchors]) - - if mode == "training": - # Class ID mask to mark class IDs supported by the dataset the image - # came from. - active_class_ids = KL.Lambda( - lambda x: parse_image_meta_graph(x)["active_class_ids"] - )(input_image_meta) - - if not config.USE_RPN_ROIS: - # Ignore predicted ROIs and use ROIs provided as an input. - input_rois = KL.Input(shape=[config.POST_NMS_ROIS_TRAINING, 4], - name="input_roi", dtype=np.int32) - # Normalize coordinates - target_rois = KL.Lambda(lambda x: norm_boxes_graph( - x, K.shape(input_image)[1:3]))(input_rois) - else: - target_rois = rpn_rois - - # Generate detection targets - # Subsamples proposals and generates target outputs for training - # Note that proposal class IDs, gt_boxes, and gt_masks are zero - # padded. Equally, returned rois and targets are zero padded. - rois, target_class_ids, target_bbox, target_mask =\ - DetectionTargetLayer(config, name="proposal_targets")([ - target_rois, input_gt_class_ids, gt_boxes, input_gt_masks]) - - # Network Heads - # TODO: verify that this handles zero padded ROIs - mrcnn_class_logits, mrcnn_class, mrcnn_bbox =\ - fpn_classifier_graph(rois, mrcnn_feature_maps, input_image_meta, - config.POOL_SIZE, config.NUM_CLASSES, - train_bn=config.TRAIN_BN) - - mrcnn_mask = build_fpn_mask_graph(rois, mrcnn_feature_maps, - input_image_meta, - config.MASK_POOL_SIZE, - config.NUM_CLASSES, - train_bn=config.TRAIN_BN) - - # TODO: clean up (use tf.identify if necessary) - output_rois = KL.Lambda(lambda x: x * 1, name="output_rois")(rois) - - # Losses - rpn_class_loss = KL.Lambda(lambda x: rpn_class_loss_graph(*x), name="rpn_class_loss")( - [input_rpn_match, rpn_class_logits]) - rpn_bbox_loss = KL.Lambda(lambda x: rpn_bbox_loss_graph(config, *x), name="rpn_bbox_loss")( - [input_rpn_bbox, input_rpn_match, rpn_bbox]) - class_loss = KL.Lambda(lambda x: mrcnn_class_loss_graph(*x), name="mrcnn_class_loss")( - [target_class_ids, mrcnn_class_logits, active_class_ids]) - bbox_loss = KL.Lambda(lambda x: mrcnn_bbox_loss_graph(*x), name="mrcnn_bbox_loss")( - [target_bbox, target_class_ids, mrcnn_bbox]) - mask_loss = KL.Lambda(lambda x: mrcnn_mask_loss_graph(*x), name="mrcnn_mask_loss")( - [target_mask, target_class_ids, mrcnn_mask]) - - # Model - inputs = [input_image, input_image_meta, - input_rpn_match, input_rpn_bbox, input_gt_class_ids, input_gt_boxes, input_gt_masks] - if not config.USE_RPN_ROIS: - inputs.append(input_rois) - outputs = [rpn_class_logits, rpn_class, rpn_bbox, - mrcnn_class_logits, mrcnn_class, mrcnn_bbox, mrcnn_mask, - rpn_rois, output_rois, - rpn_class_loss, rpn_bbox_loss, class_loss, bbox_loss, mask_loss] - model = KM.Model(inputs, outputs, name='mask_rcnn') - else: - # Network Heads - # Proposal classifier and BBox regressor heads - mrcnn_class_logits, mrcnn_class, mrcnn_bbox =\ - fpn_classifier_graph(rpn_rois, mrcnn_feature_maps, input_image_meta, - config.POOL_SIZE, config.NUM_CLASSES, - train_bn=config.TRAIN_BN) - - # Detections - # output is [batch, num_detections, (y1, x1, y2, x2, class_id, score)] in - # normalized coordinates - detections = DetectionLayer(config, name="mrcnn_detection")( - [rpn_rois, mrcnn_class, mrcnn_bbox, input_image_meta]) - - # Create masks for detections - detection_boxes = KL.Lambda(lambda x: x[..., :4])(detections) - mrcnn_mask = build_fpn_mask_graph(detection_boxes, mrcnn_feature_maps, - input_image_meta, - config.MASK_POOL_SIZE, - config.NUM_CLASSES, - train_bn=config.TRAIN_BN) - - model = KM.Model([input_image, input_image_meta, input_anchors], - [detections, mrcnn_class, mrcnn_bbox, - mrcnn_mask, rpn_rois, rpn_class, rpn_bbox], - name='mask_rcnn') - - # Add multi-GPU support. - if config.GPU_COUNT > 1: - from mrcnn.parallel_model import ParallelModel - model = ParallelModel(model, config.GPU_COUNT) - - return model - - def find_last(self): - """Finds the last checkpoint file of the last trained model in the - model directory. - Returns: - log_dir: The directory where events and weights are saved - checkpoint_path: the path to the last checkpoint file - """ - # Get directory names. Each directory corresponds to a model - dir_names = next(os.walk(self.model_dir))[1] - key = self.config.NAME.lower() - dir_names = filter(lambda f: f.startswith(key), dir_names) - dir_names = sorted(dir_names) - if not dir_names: - return None, None - # Pick last directory - dir_name = os.path.join(self.model_dir, dir_names[-1]) - # Find the last checkpoint - checkpoints = next(os.walk(dir_name))[2] - checkpoints = filter(lambda f: f.startswith("mask_rcnn"), checkpoints) - checkpoints = sorted(checkpoints) - if not checkpoints: - return dir_name, None - checkpoint = os.path.join(dir_name, checkpoints[-1]) - return dir_name, checkpoint - - def load_weights(self, filepath, by_name=False, exclude=None): - """Modified version of the correspoding Keras function with - the addition of multi-GPU support and the ability to exclude - some layers from loading. - exlude: list of layer names to excluce - """ - import h5py - from keras.engine import topology - - if exclude: - by_name = True - - if h5py is None: - raise ImportError('`load_weights` requires h5py.') - f = h5py.File(filepath, mode='r') - if 'layer_names' not in f.attrs and 'model_weights' in f: - f = f['model_weights'] - - # In multi-GPU training, we wrap the model. Get layers - # of the inner model because they have the weights. - keras_model = self.keras_model - layers = keras_model.inner_model.layers if hasattr(keras_model, "inner_model")\ - else keras_model.layers - - # Exclude some layers - if exclude: - layers = filter(lambda l: l.name not in exclude, layers) - - if by_name: - topology.load_weights_from_hdf5_group_by_name(f, layers) - else: - topology.load_weights_from_hdf5_group(f, layers) - if hasattr(f, 'close'): - f.close() - - # Update the log directory - self.set_log_dir(filepath) - - def get_imagenet_weights(self): - """Downloads ImageNet trained weights from Keras. - Returns path to weights file. - """ - from keras.utils.data_utils import get_file - TF_WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/'\ - 'releases/download/v0.2/'\ - 'resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5' - weights_path = get_file('resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5', - TF_WEIGHTS_PATH_NO_TOP, - cache_subdir='models', - md5_hash='a268eb855778b3df3c7506639542a6af') - return weights_path - - def compile(self, learning_rate, momentum): - """Gets the model ready for training. Adds losses, regularization, and - metrics. Then calls the Keras compile() function. - """ - # Optimizer object - optimizer = keras.optimizers.SGD(lr=learning_rate, momentum=momentum, - clipnorm=self.config.GRADIENT_CLIP_NORM) - # Add Losses - # First, clear previously set losses to avoid duplication - self.keras_model._losses = [] - self.keras_model._per_input_losses = {} - loss_names = ["rpn_class_loss", "rpn_bbox_loss", - "mrcnn_class_loss", "mrcnn_bbox_loss", "mrcnn_mask_loss"] - for name in loss_names: - layer = self.keras_model.get_layer(name) - if layer.output in self.keras_model.losses: - continue - self.keras_model.add_loss( - tf.reduce_mean(layer.output, keep_dims=True)) - - # Add L2 Regularization - # Skip gamma and beta weights of batch normalization layers. - reg_losses = [keras.regularizers.l2(self.config.WEIGHT_DECAY)(w) / tf.cast(tf.size(w), tf.float32) - for w in self.keras_model.trainable_weights - if 'gamma' not in w.name and 'beta' not in w.name] - self.keras_model.add_loss(tf.add_n(reg_losses)) - - # Compile - self.keras_model.compile(optimizer=optimizer, loss=[ - None] * len(self.keras_model.outputs)) - - # Add metrics for losses - for name in loss_names: - if name in self.keras_model.metrics_names: - continue - layer = self.keras_model.get_layer(name) - self.keras_model.metrics_names.append(name) - self.keras_model.metrics_tensors.append(tf.reduce_mean( - layer.output, keep_dims=True)) - - def set_trainable(self, layer_regex, keras_model=None, indent=0, verbose=1): - """Sets model layers as trainable if their names match - the given regular expression. - """ - # Print message on the first call (but not on recursive calls) - if verbose > 0 and keras_model is None: - log("Selecting layers to train") - - keras_model = keras_model or self.keras_model - - # In multi-GPU training, we wrap the model. Get layers - # of the inner model because they have the weights. - layers = keras_model.inner_model.layers if hasattr(keras_model, "inner_model")\ - else keras_model.layers - - for layer in layers: - # Is the layer a model? - if layer.__class__.__name__ == 'Model': - print("In model: ", layer.name) - self.set_trainable( - layer_regex, keras_model=layer, indent=indent + 4) - continue - - if not layer.weights: - continue - # Is it trainable? - trainable = bool(re.fullmatch(layer_regex, layer.name)) - # Update layer. If layer is a container, update inner layer. - if layer.__class__.__name__ == 'TimeDistributed': - layer.layer.trainable = trainable - else: - layer.trainable = trainable - # Print trainble layer names - if trainable and verbose > 0: - log("{}{:20} ({})".format(" " * indent, layer.name, - layer.__class__.__name__)) - - def set_log_dir(self, model_path=None): - """Sets the model log directory and epoch counter. - - model_path: If None, or a format different from what this code uses - then set a new log directory and start epochs from 0. Otherwise, - extract the log directory and the epoch counter from the file - name. - """ - # Set date and epoch counter as if starting a new model - self.epoch = 0 - now = datetime.datetime.now() - - # If we have a model path with date and epochs use them - if model_path: - # Continue from we left of. Get epoch and date from the file name - # A sample model path might look like: - # /path/to/logs/coco20171029T2315/mask_rcnn_coco_0001.h5 - regex = r".*/\w+(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})/mask\_rcnn\_\w+(\d{4})\.h5" - m = re.match(regex, model_path) - if m: - now = datetime.datetime(int(m.group(1)), int(m.group(2)), int(m.group(3)), - int(m.group(4)), int(m.group(5))) - # Epoch number in file is 1-based, and in Keras code it's 0-based. - # So, adjust for that then increment by one to start from the next epoch - self.epoch = int(m.group(6)) - 1 + 1 - - # Directory for training logs - self.log_dir = os.path.join(self.model_dir, "{}{:%Y%m%dT%H%M}".format( - self.config.NAME.lower(), now)) - - # Path to save after each epoch. Include placeholders that get filled by Keras. - self.checkpoint_path = os.path.join(self.log_dir, "mask_rcnn_{}_*epoch*.h5".format( - self.config.NAME.lower())) - self.checkpoint_path = self.checkpoint_path.replace( - "*epoch*", "{epoch:04d}") - - def train(self, train_dataset, val_dataset, learning_rate, epochs, layers, - augmentation=None): - """Train the model. - train_dataset, val_dataset: Training and validation Dataset objects. - learning_rate: The learning rate to train with - epochs: Number of training epochs. Note that previous training epochs - are considered to be done alreay, so this actually determines - the epochs to train in total rather than in this particaular - call. - layers: Allows selecting wich layers to train. It can be: - - A regular expression to match layer names to train - - One of these predefined values: - heaads: The RPN, classifier and mask heads of the network - all: All the layers - 3+: Train Resnet stage 3 and up - 4+: Train Resnet stage 4 and up - 5+: Train Resnet stage 5 and up - augmentation: Optional. An imgaug (https://github.com/aleju/imgaug) - augmentation. For example, passing imgaug.augmenters.Fliplr(0.5) - flips images right/left 50% of the time. You can pass complex - augmentations as well. This augmentation applies 50% of the - time, and when it does it flips images right/left half the time - and adds a Gausssian blur with a random sigma in range 0 to 5. - - augmentation = imgaug.augmenters.Sometimes(0.5, [ - imgaug.augmenters.Fliplr(0.5), - imgaug.augmenters.GaussianBlur(sigma=(0.0, 5.0)) - ]) - """ - assert self.mode == "training", "Create model in training mode." - - # Pre-defined layer regular expressions - layer_regex = { - # all layers but the backbone - "heads": r"(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)", - # From a specific Resnet stage and up - "3+": r"(res3.*)|(bn3.*)|(res4.*)|(bn4.*)|(res5.*)|(bn5.*)|(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)", - "4+": r"(res4.*)|(bn4.*)|(res5.*)|(bn5.*)|(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)", - "5+": r"(res5.*)|(bn5.*)|(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)", - # All layers - "all": ".*", - } - if layers in layer_regex.keys(): - layers = layer_regex[layers] - - # Data generators - train_generator = data_generator(train_dataset, self.config, shuffle=True, - augmentation=augmentation, - batch_size=self.config.BATCH_SIZE) - val_generator = data_generator(val_dataset, self.config, shuffle=True, - batch_size=self.config.BATCH_SIZE) - - # Callbacks - callbacks = [ - keras.callbacks.TensorBoard(log_dir=self.log_dir, - histogram_freq=0, write_graph=True, write_images=False), - keras.callbacks.ModelCheckpoint(self.checkpoint_path, - verbose=0, save_weights_only=True), - ] - - # Train - log("\nStarting at epoch {}. LR={}\n".format(self.epoch, learning_rate)) - log("Checkpoint Path: {}".format(self.checkpoint_path)) - self.set_trainable(layers) - self.compile(learning_rate, self.config.LEARNING_MOMENTUM) - - # Work-around for Windows: Keras fails on Windows when using - # multiprocessing workers. See discussion here: - # https://github.com/matterport/Mask_RCNN/issues/13#issuecomment-353124009 - if os.name is 'nt': - workers = 0 - else: - workers = multiprocessing.cpu_count() - - self.keras_model.fit_generator( - train_generator, - initial_epoch=self.epoch, - epochs=epochs, - steps_per_epoch=self.config.STEPS_PER_EPOCH, - callbacks=callbacks, - validation_data=val_generator, - validation_steps=self.config.VALIDATION_STEPS, - max_queue_size=100, - workers=workers, - use_multiprocessing=True, - ) - self.epoch = max(self.epoch, epochs) - - def mold_inputs(self, images): - """Takes a list of images and modifies them to the format expected - as an input to the neural network. - images: List of image matricies [height,width,depth]. Images can have - different sizes. - - Returns 3 Numpy matricies: - molded_images: [N, h, w, 3]. Images resized and normalized. - image_metas: [N, length of meta data]. Details about each image. - windows: [N, (y1, x1, y2, x2)]. The portion of the image that has the - original image (padding excluded). - """ - molded_images = [] - image_metas = [] - windows = [] - for image in images: - # Resize image - # TODO: move resizing to mold_image() - molded_image, window, scale, padding = utils.resize_image( - image, - min_dim=self.config.IMAGE_MIN_DIM, - max_dim=self.config.IMAGE_MAX_DIM, - mode=self.config.IMAGE_RESIZE_MODE) - molded_image = mold_image(molded_image, self.config) - # Build image_meta - image_meta = compose_image_meta( - 0, image.shape, molded_image.shape, window, scale, - np.zeros([self.config.NUM_CLASSES], dtype=np.int32)) - # Append - molded_images.append(molded_image) - windows.append(window) - image_metas.append(image_meta) - # Pack into arrays - molded_images = np.stack(molded_images) - image_metas = np.stack(image_metas) - windows = np.stack(windows) - return molded_images, image_metas, windows - - def unmold_detections(self, detections, mrcnn_mask, original_image_shape, - image_shape, window): - """Reformats the detections of one image from the format of the neural - network output to a format suitable for use in the rest of the - application. - - detections: [N, (y1, x1, y2, x2, class_id, score)] in normalized coordinates - mrcnn_mask: [N, height, width, num_classes] - original_image_shape: [H, W, C] Original image shape before resizing - image_shape: [H, W, C] Shape of the image after resizing and padding - window: [y1, x1, y2, x2] Pixel coordinates of box in the image where the real - image is excluding the padding. - - Returns: - boxes: [N, (y1, x1, y2, x2)] Bounding boxes in pixels - class_ids: [N] Integer class IDs for each bounding box - scores: [N] Float probability scores of the class_id - masks: [height, width, num_instances] Instance masks - """ - # How many detections do we have? - # Detections array is padded with zeros. Find the first class_id == 0. - zero_ix = np.where(detections[:, 4] == 0)[0] - N = zero_ix[0] if zero_ix.shape[0] > 0 else detections.shape[0] - - # Extract boxes, class_ids, scores, and class-specific masks - boxes = detections[:N, :4] - class_ids = detections[:N, 4].astype(np.int32) - scores = detections[:N, 5] - masks = mrcnn_mask[np.arange(N), :, :, class_ids] - - # Translate normalized coordinates in the resized image to pixel - # coordinates in the original image before resizing - window = utils.norm_boxes(window, image_shape[:2]) - wy1, wx1, wy2, wx2 = window - shift = np.array([wy1, wx1, wy1, wx1]) - wh = wy2 - wy1 # window height - ww = wx2 - wx1 # window width - scale = np.array([wh, ww, wh, ww]) - # Convert boxes to normalized coordinates on the window - boxes = np.divide(boxes - shift, scale) - # Convert boxes to pixel coordinates on the original image - boxes = utils.denorm_boxes(boxes, original_image_shape[:2]) - - # Filter out detections with zero area. Happens in early training when - # network weights are still random - exclude_ix = np.where( - (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]) <= 0)[0] - if exclude_ix.shape[0] > 0: - boxes = np.delete(boxes, exclude_ix, axis=0) - class_ids = np.delete(class_ids, exclude_ix, axis=0) - scores = np.delete(scores, exclude_ix, axis=0) - masks = np.delete(masks, exclude_ix, axis=0) - N = class_ids.shape[0] - - # Resize masks to original image size and set boundary threshold. - full_masks = [] - for i in range(N): - # Convert neural network mask to full size mask - full_mask = utils.unmold_mask(masks[i], boxes[i], original_image_shape) - full_masks.append(full_mask) - full_masks = np.stack(full_masks, axis=-1)\ - if full_masks else np.empty((0,) + masks.shape[1:3]) - - return boxes, class_ids, scores, full_masks - - def detect(self, images, verbose=0): - """Runs the detection pipeline. - - images: List of images, potentially of different sizes. - - Returns a list of dicts, one dict per image. The dict contains: - rois: [N, (y1, x1, y2, x2)] detection bounding boxes - class_ids: [N] int class IDs - scores: [N] float probability scores for the class IDs - masks: [H, W, N] instance binary masks - """ - assert self.mode == "inference", "Create model in inference mode." - assert len( - images) == self.config.BATCH_SIZE, "len(images) must be equal to BATCH_SIZE" - - if verbose: - log("Processing {} images".format(len(images))) - for image in images: - log("image", image) - - # Mold inputs to format expected by the neural network - molded_images, image_metas, windows = self.mold_inputs(images) - - # Validate image sizes - # All images in a batch MUST be of the same size - image_shape = molded_images[0].shape - for g in molded_images[1:]: - assert g.shape == image_shape,\ - "After resizing, all images must have the same size. Check IMAGE_RESIZE_MODE and image sizes." - - # Anchors - anchors = self.get_anchors(image_shape) - # Duplicate across the batch dimension because Keras requires it - # TODO: can this be optimized to avoid duplicating the anchors? - anchors = np.broadcast_to(anchors, (self.config.BATCH_SIZE,) + anchors.shape) - - if verbose: - log("molded_images", molded_images) - log("image_metas", image_metas) - log("anchors", anchors) - # Run object detection - detections, _, _, mrcnn_mask, _, _, _ =\ - self.keras_model.predict([molded_images, image_metas, anchors], verbose=0) - # Process detections - results = [] - for i, image in enumerate(images): - final_rois, final_class_ids, final_scores, final_masks =\ - self.unmold_detections(detections[i], mrcnn_mask[i], - image.shape, molded_images[i].shape, - windows[i]) - results.append({ - "rois": final_rois, - "class_ids": final_class_ids, - "scores": final_scores, - "masks": final_masks, - }) - return results - - def get_anchors(self, image_shape): - """Returns anchor pyramid for the given image size.""" - backbone_shapes = compute_backbone_shapes(self.config, image_shape) - # Cache anchors and reuse if image shape is the same - if not hasattr(self, "_anchor_cache"): - self._anchor_cache = {} - if not tuple(image_shape) in self._anchor_cache: - # Generate Anchors - a = utils.generate_pyramid_anchors( - self.config.RPN_ANCHOR_SCALES, - self.config.RPN_ANCHOR_RATIOS, - backbone_shapes, - self.config.BACKBONE_STRIDES, - self.config.RPN_ANCHOR_STRIDE) - # Keep a copy of the latest anchors in pixel coordinates because - # it's used in inspect_model notebooks. - # TODO: Remove this after the notebook are refactored to not use it - self.anchors = a - # Normalize coordinates - self._anchor_cache[tuple(image_shape)] = utils.norm_boxes(a, image_shape[:2]) - return self._anchor_cache[tuple(image_shape)] - - def ancestor(self, tensor, name, checked=None): - """Finds the ancestor of a TF tensor in the computation graph. - tensor: TensorFlow symbolic tensor. - name: Name of ancestor tensor to find - checked: For internal use. A list of tensors that were already - searched to avoid loops in traversing the graph. - """ - checked = checked if checked is not None else [] - # Put a limit on how deep we go to avoid very long loops - if len(checked) > 500: - return None - # Convert name to a regex and allow matching a number prefix - # because Keras adds them automatically - if isinstance(name, str): - name = re.compile(name.replace("/", r"(\_\d+)*/")) - - parents = tensor.op.inputs - for p in parents: - if p in checked: - continue - if bool(re.fullmatch(name, p.name)): - return p - checked.append(p) - a = self.ancestor(p, name, checked) - if a is not None: - return a - return None - - def find_trainable_layer(self, layer): - """If a layer is encapsulated by another layer, this function - digs through the encapsulation and returns the layer that holds - the weights. - """ - if layer.__class__.__name__ == 'TimeDistributed': - return self.find_trainable_layer(layer.layer) - return layer - - def get_trainable_layers(self): - """Returns a list of layers that have weights.""" - layers = [] - # Loop through all layers - for l in self.keras_model.layers: - # If layer is a wrapper, find inner trainable layer - l = self.find_trainable_layer(l) - # Include layer if it has weights - if l.get_weights(): - layers.append(l) - return layers - - def run_graph(self, images, outputs): - """Runs a sub-set of the computation graph that computes the given - outputs. - - outputs: List of tuples (name, tensor) to compute. The tensors are - symbolic TensorFlow tensors and the names are for easy tracking. - - Returns an ordered dict of results. Keys are the names received in the - input and values are Numpy arrays. - """ - model = self.keras_model - - # Organize desired outputs into an ordered dict - outputs = OrderedDict(outputs) - for o in outputs.values(): - assert o is not None - - # Build a Keras function to run parts of the computation graph - inputs = model.inputs - if model.uses_learning_phase and not isinstance(K.learning_phase(), int): - inputs += [K.learning_phase()] - kf = K.function(model.inputs, list(outputs.values())) - - # Prepare inputs - molded_images, image_metas, windows = self.mold_inputs(images) - image_shape = molded_images[0].shape - # TODO: support training mode? - # if TEST_MODE == "training": - # model_in = [molded_images, image_metas, - # target_rpn_match, target_rpn_bbox, - # gt_boxes, gt_masks] - # if not config.USE_RPN_ROIS: - # model_in.append(target_rois) - # if model.uses_learning_phase and not isinstance(K.learning_phase(), int): - # model_in.append(1.) - # outputs_np = kf(model_in) - # else: - # Anchors - anchors = self.get_anchors(image_shape) - # Duplicate across the batch dimension because Keras requires it - # TODO: can this be optimized to avoid duplicating the anchors? - anchors = np.broadcast_to(anchors, (self.config.BATCH_SIZE,) + anchors.shape) - model_in = [molded_images, image_metas, anchors] - - # Run inference - if model.uses_learning_phase and not isinstance(K.learning_phase(), int): - model_in.append(0.) - outputs_np = kf(model_in) - - # Pack the generated Numpy arrays into a a dict and log the results. - outputs_np = OrderedDict([(k, v) - for k, v in zip(outputs.keys(), outputs_np)]) - for k, v in outputs_np.items(): - log(k, v) - return outputs_np - - -############################################################ -# Data Formatting -############################################################ - -def compose_image_meta(image_id, original_image_shape, image_shape, - window, scale, active_class_ids): - """Takes attributes of an image and puts them in one 1D array. - - image_id: An int ID of the image. Useful for debugging. - original_image_shape: [H, W, C] before resizing or padding. - image_shape: [H, W, C] after resizing and padding - window: (y1, x1, y2, x2) in pixels. The area of the image where the real - image is (excluding the padding) - scale: The scaling factor applied to the original image (float32) - active_class_ids: List of class_ids available in the dataset from which - the image came. Useful if training on images from multiple datasets - where not all classes are present in all datasets. - """ - meta = np.array( - [image_id] + # size=1 - list(original_image_shape) + # size=3 - list(image_shape) + # size=3 - list(window) + # size=4 (y1, x1, y2, x2) in image cooredinates - [scale] + # size=1 - list(active_class_ids) # size=num_classes - ) - return meta - - -def parse_image_meta(meta): - """Parses an array that contains image attributes to its components. - See compose_image_meta() for more details. - - meta: [batch, meta length] where meta length depends on NUM_CLASSES - - Returns a dict of the parsed values. - """ - image_id = meta[:, 0] - original_image_shape = meta[:, 1:4] - image_shape = meta[:, 4:7] - window = meta[:, 7:11] # (y1, x1, y2, x2) window of image in in pixels - scale = meta[:, 11] - active_class_ids = meta[:, 12:] - return { - "image_id": image_id.astype(np.int32), - "original_image_shape": original_image_shape.astype(np.int32), - "image_shape": image_shape.astype(np.int32), - "window": window.astype(np.int32), - "scale": scale.astype(np.float32), - "active_class_ids": active_class_ids.astype(np.int32), - } - - -def parse_image_meta_graph(meta): - """Parses a tensor that contains image attributes to its components. - See compose_image_meta() for more details. - - meta: [batch, meta length] where meta length depends on NUM_CLASSES - - Returns a dict of the parsed tensors. - """ - image_id = meta[:, 0] - original_image_shape = meta[:, 1:4] - image_shape = meta[:, 4:7] - window = meta[:, 7:11] # (y1, x1, y2, x2) window of image in in pixels - scale = meta[:, 11] - active_class_ids = meta[:, 12:] - return { - "image_id": image_id, - "original_image_shape": original_image_shape, - "image_shape": image_shape, - "window": window, - "scale": scale, - "active_class_ids": active_class_ids, - } - - -def mold_image(images, config): - """Expects an RGB image (or array of images) and subtraces - the mean pixel and converts it to float. Expects image - colors in RGB order. - """ - return images.astype(np.float32) - config.MEAN_PIXEL - - -def unmold_image(normalized_images, config): - """Takes a image normalized with mold() and returns the original.""" - return (normalized_images + config.MEAN_PIXEL).astype(np.uint8) - - -############################################################ -# Miscellenous Graph Functions -############################################################ - -def trim_zeros_graph(boxes, name=None): - """Often boxes are represented with matricies of shape [N, 4] and - are padded with zeros. This removes zero boxes. - - boxes: [N, 4] matrix of boxes. - non_zeros: [N] a 1D boolean mask identifying the rows to keep - """ - non_zeros = tf.cast(tf.reduce_sum(tf.abs(boxes), axis=1), tf.bool) - boxes = tf.boolean_mask(boxes, non_zeros, name=name) - return boxes, non_zeros - - -def batch_pack_graph(x, counts, num_rows): - """Picks different number of values from each row - in x depending on the values in counts. - """ - outputs = [] - for i in range(num_rows): - outputs.append(x[i, :counts[i]]) - return tf.concat(outputs, axis=0) - - -def norm_boxes_graph(boxes, shape): - """Converts boxes from pixel coordinates to normalized coordinates. - boxes: [..., (y1, x1, y2, x2)] in pixel coordinates - shape: [..., (height, width)] in pixels - - Note: In pixel coordinates (y2, x2) is outside the box. But in normalized - coordinates it's inside the box. - - Returns: - [..., (y1, x1, y2, x2)] in normalized coordinates - """ - h, w = tf.split(tf.cast(shape, tf.float32), 2) - scale = tf.concat([h, w, h, w], axis=-1) - tf.constant(1.0) - shift = tf.constant([0., 0., 1., 1.]) - return tf.divide(boxes - shift, scale) - - -def denorm_boxes_graph(boxes, shape): - """Converts boxes from normalized coordinates to pixel coordinates. - boxes: [..., (y1, x1, y2, x2)] in normalized coordinates - shape: [..., (height, width)] in pixels - - Note: In pixel coordinates (y2, x2) is outside the box. But in normalized - coordinates it's inside the box. - - Returns: - [..., (y1, x1, y2, x2)] in pixel coordinates - """ - h, w = tf.split(tf.cast(shape, tf.float32), 2) - scale = tf.concat([h, w, h, w], axis=-1) - tf.constant(1.0) - shift = tf.constant([0., 0., 1., 1.]) - return tf.cast(tf.round(tf.multiply(boxes, scale) + shift), tf.int32) diff --git a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules.py b/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Goutam982/RVC_V2_voice_clone/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r50_fpn_gn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r50_fpn_gn_1x_coco.py deleted file mode 100644 index 6acf080afe1b04e50467b16b60700feb5c12e886..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sabl/sabl_retinanet_r50_fpn_gn_1x_coco.py +++ /dev/null @@ -1,52 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -# model settings -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - bbox_head=dict( - _delete_=True, - type='SABLRetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - norm_cfg=norm_cfg, - bbox_coder=dict( - type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5), - loss_bbox_reg=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/musicgen.py b/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/musicgen.py deleted file mode 100644 index 288f7fd6973edcd17f8e648debe9ea42d3f63390..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/audiocraft/models/musicgen.py +++ /dev/null @@ -1,375 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. - """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel, - max_duration: float = 30): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.max_duration = max_duration - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - self._progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device=None): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if device is None: - if torch.cuda.device_count(): - device = 'cuda' - else: - device = 'cpu' - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - raise ValueError( - f"{name} is not a valid checkpoint name. " - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - if name == 'melody': - lm.condition_provider.conditioners['self_wav'].match_len_on_eval = True - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False, extend_stride: float = 18): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - extend_stride: when doing extended generation (i.e. more than 30 seconds), by how much - should we extend the audio each time. Larger values will mean less context is - preserved, and shorter value will require extra computations. - """ - assert extend_stride < self.max_duration, "Cannot stride by more than max generation duration." - self.extend_stride = extend_stride - self.duration = duration - self.generation_params = { - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def set_custom_progress_callback(self, progress_callback: tp.Optional[tp.Callable[[int, int], None]] = None): - """Override the default progress callback.""" - self._progress_callback = progress_callback - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. - - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. - """ - i = 0 - prompt_list = attributes[0].text['description'] - total_gen_len = int(self.duration * self.frame_rate) - max_prompt_len = int(min(self.duration, self.max_duration) * self.frame_rate) - current_gen_offset: int = 0 - - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - generated_tokens += current_gen_offset - if current_gen_offset > 0: - generated_tokens += (self.max_duration - self.extend_stride) * self.frame_rate - if self._progress_callback is not None: - # Note that total_gen_len might be quite wrong depending on the - # codebook pattern used, but with delay it is almost accurate. - self._progress_callback(generated_tokens, total_gen_len) - else: - print(f'{generated_tokens: 6d} / {total_gen_len: 6d}', end='\r') - - if prompt_tokens is not None: - assert max_prompt_len >= prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - if self.duration <= self.max_duration: - # generate by sampling from LM, simple case. - with self.autocast: - attributes[0].text['description'] = prompt_list[0] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=total_gen_len, **self.generation_params) - - else: - # now this gets a bit messier, we need to handle prompts, - # melody conditioning etc. - ref_wavs = [attr.wav['self_wav'] for attr in attributes] - all_tokens = [] - if prompt_tokens is None: - prompt_length = 0 - else: - all_tokens.append(prompt_tokens) - prompt_length = prompt_tokens.shape[-1] - - stride_tokens = int(self.frame_rate * self.extend_stride) - - while current_gen_offset + prompt_length < total_gen_len: - time_offset = current_gen_offset / self.frame_rate - chunk_duration = min(self.duration - time_offset, self.max_duration) - max_gen_len = int(chunk_duration * self.frame_rate) - for attr, ref_wav in zip(attributes, ref_wavs): - wav_length = ref_wav.length.item() - if wav_length == 0: - continue - # We will extend the wav periodically if it not long enough. - # we have to do it here rather than in conditioners.py as otherwise - # we wouldn't have the full wav. - initial_position = int(time_offset * self.sample_rate) - wav_target_length = int(self.max_duration * self.sample_rate) - print(initial_position / self.sample_rate, wav_target_length / self.sample_rate) - positions = torch.arange(initial_position, - initial_position + wav_target_length, device=self.device) - attr.wav['self_wav'] = WavCondition( - ref_wav[0][:, positions % wav_length], - torch.full_like(ref_wav[1], wav_target_length)) - with self.autocast: - if i >= len(prompt_list): - i = len(prompt_list) - 1 - attributes[0].text['description'] = prompt_list[i] - gen_tokens = self.lm.generate( - prompt_tokens, attributes, - callback=callback, max_gen_len=max_gen_len, **self.generation_params) - i = i + 1 - if prompt_tokens is None: - all_tokens.append(gen_tokens) - else: - all_tokens.append(gen_tokens[:, :, prompt_tokens.shape[-1]:]) - prompt_tokens = gen_tokens[:, :, stride_tokens:] - prompt_length = prompt_tokens.shape[-1] - current_gen_offset += stride_tokens - - gen_tokens = torch.cat(all_tokens, dim=-1) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio - - def to(self, device: str): - self.compression_model.to(device) - self.lm.to(device) - return self diff --git a/spaces/Hackatos/Smart-Shower-ATC/README.md b/spaces/Hackatos/Smart-Shower-ATC/README.md deleted file mode 100644 index a4cdfac00628e63ee8bf5d58dc6867d1d6fcf4c1..0000000000000000000000000000000000000000 --- a/spaces/Hackatos/Smart-Shower-ATC/README.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Smart Shower Dashboard -emoji: 📈 -colorFrom: purple -colorTo: gray -sdk: docker -app_port: 8050 ---- -# Docker Dash (Plotly) - -Dockerize a Python Dash app for quick prototyping. - -## Build and run - -`prod` version is served by `gunicorn` instead of the `flask` dev server. - -```sh -# dev -docker build -f Dockerfile.dev -t docker-dash-example-dev . -docker run -p 8050:8050 -v "$(pwd)"/app:/app --rm docker-dash-example-dev - -# prod -docker build -f Dockerfile -t docker-dash-example-prod . -docker run -p 8050:8050 -v "$(pwd)"/app:/app --rm docker-dash-example-prod -``` - -## Access the page - -Go to `http://localhost:8050` in browser. - -## Switch debug mode in Dockerfile - -```dockerfile -ENV DASH_DEBUG_MODE True # False -``` - -## Development - -Install the app requirements for development to get better editor support. - -```sh -poetry install -``` - -Optional: clean initialization of `poetry`: - -```sh -poetry init -cat app/requirements.txt | xargs poetry add -``` \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py deleted file mode 100644 index a92da3a298e21528b7007df3f8198bb3af94a485..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py +++ /dev/null @@ -1 +0,0 @@ -../truncated_bptt/truncated_bptt_lm_task.py \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py deleted file mode 100644 index 8f3c8703ca37398b9d389ce5181bdfac2333cdf2..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os - -from fairseq import checkpoint_utils, tasks -import sentencepiece as spm -import torch - -try: - from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS - from simuleval.agents import TextAgent -except ImportError: - print("Please install simuleval 'pip install simuleval'") - - -BOS_PREFIX = "\u2581" - - -class SimulTransTextAgentJA(TextAgent): - """ - Simultaneous Translation - Text agent for Japanese - """ - def __init__(self, args): - - # Whether use gpu - self.gpu = getattr(args, "gpu", False) - - # Max len - self.max_len = args.max_len - - # Load Model - self.load_model_vocab(args) - - # build word splitter - self.build_word_splitter(args) - - self.eos = DEFAULT_EOS - - def initialize_states(self, states): - states.incremental_states = dict() - states.incremental_states["online"] = dict() - - def to_device(self, tensor): - if self.gpu: - return tensor.cuda() - else: - return tensor.cpu() - - def load_model_vocab(self, args): - - filename = args.model_path - if not os.path.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - - state = checkpoint_utils.load_checkpoint_to_cpu(filename) - - task_args = state["cfg"]["task"] - task_args.data = args.data_bin - - task = tasks.setup_task(task_args) - - # build model for ensemble - state["cfg"]["model"].load_pretrained_encoder_from = None - state["cfg"]["model"].load_pretrained_decoder_from = None - - self.model = task.build_model(state["cfg"]["model"]) - self.model.load_state_dict(state["model"], strict=True) - self.model.eval() - self.model.share_memory() - - if self.gpu: - self.model.cuda() - - # Set dictionary - self.dict = {} - self.dict["tgt"] = task.target_dictionary - self.dict["src"] = task.source_dictionary - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--model-path', type=str, required=True, - help='path to your pretrained model.') - parser.add_argument("--data-bin", type=str, required=True, - help="Path of data binary") - parser.add_argument("--max-len", type=int, default=100, - help="Max length of translation") - parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for target text.") - parser.add_argument("--tgt-splitter-path", type=str, default=None, - help="Subword splitter model path for target text.") - parser.add_argument("--src-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for source text.") - parser.add_argument("--src-splitter-path", type=str, default=None, - help="Subword splitter model path for source text.") - # fmt: on - return parser - - def build_word_splitter(self, args): - self.spm = {} - for lang in ['src', 'tgt']: - if getattr(args, f'{lang}_splitter_type', None): - path = getattr(args, f'{lang}_splitter_path', None) - if path: - self.spm[lang] = spm.SentencePieceProcessor() - self.spm[lang].Load(path) - - def segment_to_units(self, segment, states): - # Split a full word (segment) into subwords (units) - return self.spm['src'].EncodeAsPieces(segment) - - def update_model_encoder(self, states): - if len(states.units.source) == 0: - return - - src_indices = [ - self.dict['src'].index(x) - for x in states.units.source.value - ] - - if states.finish_read(): - # Append the eos index when the prediction is over - src_indices += [self.dict["tgt"].eos_index] - - src_indices = self.to_device( - torch.LongTensor(src_indices).unsqueeze(0) - ) - src_lengths = self.to_device( - torch.LongTensor([src_indices.size(1)]) - ) - - states.encoder_states = self.model.encoder(src_indices, src_lengths) - - torch.cuda.empty_cache() - - def update_states_read(self, states): - # Happens after a read action. - self.update_model_encoder(states) - - def units_to_segment(self, units, states): - # Merge sub words (units) to full word (segment). - # For Japanese, we can directly send - # the untokenized token to server except the BOS token - # with following option - # --sacrebleu-tokenizer MeCab - # --eval-latency-unit char - # --no-space - token = units.value.pop() - - if ( - token == self.dict["tgt"].eos_word - or len(states.segments.target) > self.max_len - ): - return DEFAULT_EOS - - if BOS_PREFIX == token: - return None - if token[0] == BOS_PREFIX: - return token[1:] - else: - return token - - def policy(self, states): - - if not getattr(states, "encoder_states", None): - # No encoder states, read a token first - return READ_ACTION - - # encode previous predicted target tokens - tgt_indices = self.to_device( - torch.LongTensor( - [self.model.decoder.dictionary.eos()] - + [ - self.dict['tgt'].index(x) - for x in states.units.target.value - if x is not None - ] - ).unsqueeze(0) - ) - - # Current steps - states.incremental_states["steps"] = { - "src": states.encoder_states["encoder_out"][0].size(0), - "tgt": 1 + len(states.units.target), - } - - # Online only means the reading is not finished - states.incremental_states["online"]["only"] = ( - torch.BoolTensor([not states.finish_read()]) - ) - - x, outputs = self.model.decoder.forward( - prev_output_tokens=tgt_indices, - encoder_out=states.encoder_states, - incremental_state=states.incremental_states, - ) - - states.decoder_out = x - - torch.cuda.empty_cache() - - if outputs.action == 0: - return READ_ACTION - else: - return WRITE_ACTION - - def predict(self, states): - # Predict target token from decoder states - decoder_states = states.decoder_out - - lprobs = self.model.get_normalized_probs( - [decoder_states[:, -1:]], log_probs=True - ) - - index = lprobs.argmax(dim=-1)[0, 0].item() - - if index != self.dict['tgt'].eos_index: - token = self.dict['tgt'].string([index]) - else: - token = self.dict['tgt'].eos_word - - return token diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/collaters.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/collaters.py deleted file mode 100644 index 6acfec876b87e5a00bc92083b1181301a2a18e3f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_recognition/data/collaters.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" - This module contains collection of classes which implement - collate functionalities for various tasks. - - Collaters should know what data to expect for each sample - and they should pack / collate them into batches -""" - - -from __future__ import absolute_import, division, print_function, unicode_literals - -import numpy as np -import torch -from fairseq.data import data_utils as fairseq_data_utils - - -class Seq2SeqCollater(object): - """ - Implements collate function mainly for seq2seq tasks - This expects each sample to contain feature (src_tokens) and - targets. - This collator is also used for aligned training task. - """ - - def __init__( - self, - feature_index=0, - label_index=1, - pad_index=1, - eos_index=2, - move_eos_to_beginning=True, - ): - self.feature_index = feature_index - self.label_index = label_index - self.pad_index = pad_index - self.eos_index = eos_index - self.move_eos_to_beginning = move_eos_to_beginning - - def _collate_frames(self, frames): - """Convert a list of 2d frames into a padded 3d tensor - Args: - frames (list): list of 2d frames of size L[i]*f_dim. Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3d tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - len_max = max(frame.size(0) for frame in frames) - f_dim = frames[0].size(1) - res = frames[0].new(len(frames), len_max, f_dim).fill_(0.0) - - for i, v in enumerate(frames): - res[i, : v.size(0)] = v - - return res - - def collate(self, samples): - """ - utility function to collate samples into batch for speech recognition. - """ - if len(samples) == 0: - return {} - - # parse samples into torch tensors - parsed_samples = [] - for s in samples: - # skip invalid samples - if s["data"][self.feature_index] is None: - continue - source = s["data"][self.feature_index] - if isinstance(source, (np.ndarray, np.generic)): - source = torch.from_numpy(source) - target = s["data"][self.label_index] - if isinstance(target, (np.ndarray, np.generic)): - target = torch.from_numpy(target).long() - elif isinstance(target, list): - target = torch.LongTensor(target) - - parsed_sample = {"id": s["id"], "source": source, "target": target} - parsed_samples.append(parsed_sample) - samples = parsed_samples - - id = torch.LongTensor([s["id"] for s in samples]) - frames = self._collate_frames([s["source"] for s in samples]) - # sort samples by descending number of frames - frames_lengths = torch.LongTensor([s["source"].size(0) for s in samples]) - frames_lengths, sort_order = frames_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - frames = frames.index_select(0, sort_order) - - target = None - target_lengths = None - prev_output_tokens = None - if samples[0].get("target", None) is not None: - ntokens = sum(len(s["target"]) for s in samples) - target = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, sort_order) - target_lengths = torch.LongTensor( - [s["target"].size(0) for s in samples] - ).index_select(0, sort_order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=self.move_eos_to_beginning, - ) - prev_output_tokens = prev_output_tokens.index_select(0, sort_order) - else: - ntokens = sum(len(s["source"]) for s in samples) - - batch = { - "id": id, - "ntokens": ntokens, - "net_input": {"src_tokens": frames, "src_lengths": frames_lengths}, - "target": target, - "target_lengths": target_lengths, - "nsentences": len(samples), - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens - return batch diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py deleted file mode 100644 index eb0f7c360d749fd9d489b40b04dae8652b095098..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import torch -import numpy as np -from examples.textless_nlp.gslm.unit2speech.tacotron2.text import ( - EOS_TOK, - SOS_TOK, - code_to_sequence, - text_to_sequence, -) -from examples.textless_nlp.gslm.unit2speech.tacotron2.utils import ( - load_code_dict, -) - - -class TacotronInputDataset: - def __init__(self, hparams, append_str=""): - self.is_text = getattr(hparams, "text_or_code", "text") == "text" - if not self.is_text: - self.code_dict = load_code_dict(hparams.code_dict) - self.code_key = hparams.code_key - self.add_sos = hparams.add_sos - self.add_eos = hparams.add_eos - self.collapse_code = hparams.collapse_code - self.append_str = append_str - - def process_code(self, inp_str): - inp_toks = inp_str.split() - if self.add_sos: - inp_toks = [SOS_TOK] + inp_toks - if self.add_eos: - inp_toks = inp_toks + [EOS_TOK] - return code_to_sequence(inp_toks, self.code_dict, self.collapse_code) - - def process_text(self, inp_str): - return text_to_sequence(inp_str, ["english_cleaners"]) - - def get_tensor(self, inp_str): - # uid, txt, inp_str = self._get_data(idx) - inp_str = inp_str + self.append_str - if self.is_text: - inp_toks = self.process_text(inp_str) - else: - inp_toks = self.process_code(inp_str) - return torch.from_numpy(np.array(inp_toks)).long() - - def __len__(self): - return len(self.data) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_memory_efficient_fp16.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_memory_efficient_fp16.py deleted file mode 100644 index 2bf2f29888d6027896128930626b1aafe7f18475..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_memory_efficient_fp16.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import unittest - -import torch -from fairseq.optim.adam import FairseqAdam -from fairseq.optim.fp16_optimizer import MemoryEfficientFP16Optimizer -from omegaconf import OmegaConf - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestMemoryEfficientFP16(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_load_state_dict(self): - # define simple FP16 model - model = torch.nn.Linear(5, 5).cuda().half() - params = list(model.parameters()) - - # initialize memory efficient FP16 optimizer - # with pseudo DictConfigs - optimizer = FairseqAdam( - cfg=OmegaConf.create( - vars( - argparse.Namespace( - adam_betas="(0.9, 0.999)", - adam_eps=1e-8, - weight_decay=0.0, - lr=[0.00001], - ) - ) - ), - params=params, - ) - me_optimizer = MemoryEfficientFP16Optimizer( - cfg=OmegaConf.create( - { - "common": vars( - argparse.Namespace( - fp16_init_scale=1, - fp16_scale_window=1, - fp16_scale_tolerance=1, - threshold_loss_scale=1, - min_loss_scale=1e-4, - ) - ) - } - ), - params=params, - optimizer=optimizer, - ) - - # optimizer state is created in the first step - loss = model(torch.rand(5).cuda().half()).sum() - me_optimizer.backward(loss) - me_optimizer.step() - - # reload state - state = me_optimizer.state_dict() - me_optimizer.load_state_dict(state) - for k, v in me_optimizer.optimizer.state.items(): - self.assertTrue(k.dtype == torch.float16) - for v_i in v.values(): - if torch.is_tensor(v_i): - self.assertTrue(v_i.dtype == torch.float32) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Harveenchadha/oiTrans/scripts/__init__.py b/spaces/Harveenchadha/oiTrans/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Hila/RobustViT/imagenet_finetune_tokencut.py b/spaces/Hila/RobustViT/imagenet_finetune_tokencut.py deleted file mode 100644 index d1b24fae00ed34f01c9151ae18449833848cc7d5..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/imagenet_finetune_tokencut.py +++ /dev/null @@ -1,577 +0,0 @@ -import argparse -import os -import random -import shutil -import time -import warnings - -import torch -import torch.nn as nn -import torch.nn.parallel -import torch.backends.cudnn as cudnn -import torch.distributed as dist -import torch.optim -import torch.multiprocessing as mp -import torch.utils.data -import torch.utils.data.distributed -import torchvision.transforms as transforms -import torchvision.datasets as datasets -import torchvision.models as models -from tokencut_dataset import SegmentationDataset, VAL_PARTITION, TRAIN_PARTITION - -# Uncomment the expected model below - -# ViT -from ViT.ViT import vit_base_patch16_224 as vit -# from ViT.ViT import vit_large_patch16_224 as vit - -# ViT-AugReg -# from ViT.ViT_new import vit_small_patch16_224 as vit -# from ViT.ViT_new import vit_base_patch16_224 as vit -# from ViT.ViT_new import vit_large_patch16_224 as vit - -# DeiT -# from ViT.ViT import deit_base_patch16_224 as vit -# from ViT.ViT import deit_small_patch16_224 as vit - -from ViT.explainer import generate_relevance, get_image_with_relevance -import torchvision -import cv2 -from torch.utils.tensorboard import SummaryWriter -import json - -model_names = sorted(name for name in models.__dict__ - if name.islower() and not name.startswith("__") - and callable(models.__dict__[name])) -model_names.append("vit") - -parser = argparse.ArgumentParser(description='PyTorch ImageNet Training') -parser.add_argument('--data', metavar='DATA', - help='path to dataset') -parser.add_argument('--seg_data', metavar='SEG_DATA', - help='path to segmentation dataset') -parser.add_argument('-j', '--workers', default=4, type=int, metavar='N', - help='number of data loading workers (default: 4)') -parser.add_argument('--epochs', default=150, type=int, metavar='N', - help='number of total epochs to run') -parser.add_argument('--start-epoch', default=0, type=int, metavar='N', - help='manual epoch number (useful on restarts)') -parser.add_argument('-b', '--batch-size', default=10, type=int, - metavar='N', - help='mini-batch size (default: 256), this is the total ' - 'batch size of all GPUs on the current node when ' - 'using Data Parallel or Distributed Data Parallel') -parser.add_argument('--lr', '--learning-rate', default=3e-6, type=float, - metavar='LR', help='initial learning rate', dest='lr') -parser.add_argument('--momentum', default=0.9, type=float, metavar='M', - help='momentum') -parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float, - metavar='W', help='weight decay (default: 1e-4)', - dest='weight_decay') -parser.add_argument('-p', '--print-freq', default=10, type=int, - metavar='N', help='print frequency (default: 10)') -parser.add_argument('--resume', default='', type=str, metavar='PATH', - help='path to latest checkpoint (default: none)') -parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true', - help='evaluate model on validation set') -parser.add_argument('--pretrained', dest='pretrained', action='store_true', - help='use pre-trained model') -parser.add_argument('--world-size', default=-1, type=int, - help='number of nodes for distributed training') -parser.add_argument('--rank', default=-1, type=int, - help='node rank for distributed training') -parser.add_argument('--dist-url', default='tcp://224.66.41.62:23456', type=str, - help='url used to set up distributed training') -parser.add_argument('--dist-backend', default='nccl', type=str, - help='distributed backend') -parser.add_argument('--seed', default=None, type=int, - help='seed for initializing training. ') -parser.add_argument('--gpu', default=None, type=int, - help='GPU id to use.') -parser.add_argument('--save_interval', default=20, type=int, - help='interval to save segmentation results.') -parser.add_argument('--num_samples', default=3, type=int, - help='number of samples per class for training') -parser.add_argument('--multiprocessing-distributed', action='store_true', - help='Use multi-processing distributed training to launch ' - 'N processes per node, which has N GPUs. This is the ' - 'fastest way to use PyTorch for either single node or ' - 'multi node data parallel training') -parser.add_argument('--lambda_seg', default=0.1, type=float, - help='influence of segmentation loss.') -parser.add_argument('--lambda_acc', default=1, type=float, - help='influence of accuracy loss.') -parser.add_argument('--experiment_folder', default=None, type=str, - help='path to folder to use for experiment.') -parser.add_argument('--dilation', default=0, type=float, - help='Use dilation on the segmentation maps.') -parser.add_argument('--lambda_background', default=1, type=float, - help='coefficient of loss for segmentation background.') -parser.add_argument('--lambda_foreground', default=0.3, type=float, - help='coefficient of loss for segmentation foreground.') -parser.add_argument('--num_classes', default=500, type=int, - help='coefficient of loss for segmentation foreground.') -parser.add_argument('--temperature', default=1, type=float, - help='temperature for softmax (mostly for DeiT).') - -best_loss = float('inf') - -def main(): - args = parser.parse_args() - - if args.experiment_folder is None: - args.experiment_folder = f'experiment/' \ - f'lr_{args.lr}_seg_{args.lambda_seg}_acc_{args.lambda_acc}' \ - f'_bckg_{args.lambda_background}_fgd_{args.lambda_foreground}' - if args.temperature != 1: - args.experiment_folder = args.experiment_folder + f'_tempera_{args.temperature}' - if args.batch_size != 8: - args.experiment_folder = args.experiment_folder + f'_bs_{args.batch_size}' - if args.num_classes != 500: - args.experiment_folder = args.experiment_folder + f'_num_classes_{args.num_classes}' - if args.num_samples != 3: - args.experiment_folder = args.experiment_folder + f'_num_samples_{args.num_samples}' - if args.epochs != 150: - args.experiment_folder = args.experiment_folder + f'_num_epochs_{args.epochs}' - - if os.path.exists(args.experiment_folder): - raise Exception(f"Experiment path {args.experiment_folder} already exists!") - os.mkdir(args.experiment_folder) - os.mkdir(f'{args.experiment_folder}/train_samples') - os.mkdir(f'{args.experiment_folder}/val_samples') - - with open(f'{args.experiment_folder}/commandline_args.txt', 'w') as f: - json.dump(args.__dict__, f, indent=2) - - if args.seed is not None: - random.seed(args.seed) - torch.manual_seed(args.seed) - cudnn.deterministic = True - warnings.warn('You have chosen to seed training. ' - 'This will turn on the CUDNN deterministic setting, ' - 'which can slow down your training considerably! ' - 'You may see unexpected behavior when restarting ' - 'from checkpoints.') - - if args.gpu is not None: - warnings.warn('You have chosen a specific GPU. This will completely ' - 'disable data parallelism.') - - if args.dist_url == "env://" and args.world_size == -1: - args.world_size = int(os.environ["WORLD_SIZE"]) - - args.distributed = args.world_size > 1 or args.multiprocessing_distributed - - ngpus_per_node = torch.cuda.device_count() - if args.multiprocessing_distributed: - # Since we have ngpus_per_node processes per node, the total world_size - # needs to be adjusted accordingly - args.world_size = ngpus_per_node * args.world_size - # Use torch.multiprocessing.spawn to launch distributed processes: the - # main_worker process function - mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args)) - else: - # Simply call main_worker function - main_worker(args.gpu, ngpus_per_node, args) - - -def main_worker(gpu, ngpus_per_node, args): - global best_loss - args.gpu = gpu - - if args.gpu is not None: - print("Use GPU: {} for training".format(args.gpu)) - - if args.distributed: - if args.dist_url == "env://" and args.rank == -1: - args.rank = int(os.environ["RANK"]) - if args.multiprocessing_distributed: - # For multiprocessing distributed training, rank needs to be the - # global rank among all the processes - args.rank = args.rank * ngpus_per_node + gpu - dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, - world_size=args.world_size, rank=args.rank) - # create model - print("=> creating model") - model = vit(pretrained=True).cuda() - model.train() - print("done") - - if not torch.cuda.is_available(): - print('using CPU, this will be slow') - elif args.distributed: - # For multiprocessing distributed, DistributedDataParallel constructor - # should always set the single device scope, otherwise, - # DistributedDataParallel will use all available devices. - if args.gpu is not None: - torch.cuda.set_device(args.gpu) - model.cuda(args.gpu) - # When using a single GPU per process and per - # DistributedDataParallel, we need to divide the batch size - # ourselves based on the total number of GPUs we have - args.batch_size = int(args.batch_size / ngpus_per_node) - args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node) - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - else: - model.cuda() - # DistributedDataParallel will divide and allocate batch_size to all - # available GPUs if device_ids are not set - model = torch.nn.parallel.DistributedDataParallel(model) - elif args.gpu is not None: - torch.cuda.set_device(args.gpu) - model = model.cuda(args.gpu) - else: - # DataParallel will divide and allocate batch_size to all available GPUs - print("start") - model = torch.nn.DataParallel(model).cuda() - - # define loss function (criterion) and optimizer - criterion = nn.CrossEntropyLoss().cuda(args.gpu) - optimizer = torch.optim.AdamW(model.parameters(), args.lr, weight_decay=args.weight_decay) - - # optionally resume from a checkpoint - if args.resume: - if os.path.isfile(args.resume): - print("=> loading checkpoint '{}'".format(args.resume)) - if args.gpu is None: - checkpoint = torch.load(args.resume) - else: - # Map model to be loaded to specified single gpu. - loc = 'cuda:{}'.format(args.gpu) - checkpoint = torch.load(args.resume, map_location=loc) - args.start_epoch = checkpoint['epoch'] - best_loss = checkpoint['best_loss'] - if args.gpu is not None: - # best_loss may be from a checkpoint from a different GPU - best_loss = best_loss.to(args.gpu) - model.load_state_dict(checkpoint['state_dict']) - optimizer.load_state_dict(checkpoint['optimizer']) - print("=> loaded checkpoint '{}' (epoch {})" - .format(args.resume, checkpoint['epoch'])) - else: - print("=> no checkpoint found at '{}'".format(args.resume)) - - cudnn.benchmark = True - - train_dataset = SegmentationDataset(args.seg_data, args.data, partition=TRAIN_PARTITION, train_classes=args.num_classes, - num_samples=args.num_samples) - - if args.distributed: - train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset) - else: - train_sampler = None - - train_loader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), - num_workers=args.workers, pin_memory=True, sampler=train_sampler) - - val_dataset = SegmentationDataset(args.seg_data, args.data, partition=VAL_PARTITION, train_classes=args.num_classes, - num_samples=1) - - val_loader = torch.utils.data.DataLoader( - val_dataset, batch_size=10, shuffle=False, - num_workers=args.workers, pin_memory=True) - - if args.evaluate: - validate(val_loader, model, criterion, 0, args) - return - - for epoch in range(args.start_epoch, args.epochs): - if args.distributed: - train_sampler.set_epoch(epoch) - adjust_learning_rate(optimizer, epoch, args) - - log_dir = os.path.join(args.experiment_folder, 'logs') - logger = SummaryWriter(log_dir=log_dir) - args.logger = logger - - # train for one epoch - train(train_loader, model, criterion, optimizer, epoch, args) - - # evaluate on validation set - loss1 = validate(val_loader, model, criterion, epoch, args) - - # remember best acc@1 and save checkpoint - is_best = loss1 <= best_loss - best_loss = min(loss1, best_loss) - - if not args.multiprocessing_distributed or (args.multiprocessing_distributed - and args.rank % ngpus_per_node == 0): - save_checkpoint({ - 'epoch': epoch + 1, - 'state_dict': model.state_dict(), - 'best_loss': best_loss, - 'optimizer' : optimizer.state_dict(), - }, is_best, folder=args.experiment_folder) - - -def train(train_loader, model, criterion, optimizer, epoch, args): - mse_criterion = torch.nn.MSELoss(reduction='mean') - losses = AverageMeter('Loss', ':.4e') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - orig_top1 = AverageMeter('Acc@1_orig', ':6.2f') - orig_top5 = AverageMeter('Acc@5_orig', ':6.2f') - progress = ProgressMeter( - len(train_loader), - # [batch_time, data_time, losses, top1, top5, orig_top1, orig_top5], - [losses, top1, top5, orig_top1, orig_top5], - prefix="Epoch: [{}]".format(epoch)) - - orig_model = vit(pretrained=True).cuda() - orig_model.eval() - - # switch to train mode - model.train() - - end = time.time() - for i, (seg_map, image_ten, class_name) in enumerate(train_loader): - - if torch.cuda.is_available(): - image_ten = image_ten.cuda(args.gpu, non_blocking=True) - seg_map = seg_map.cuda(args.gpu, non_blocking=True) - class_name = class_name.cuda(args.gpu, non_blocking=True) - - # compute output - - # segmentation loss - relevance = generate_relevance(model, image_ten, index=class_name) - - reverse_seg_map = seg_map.clone() - reverse_seg_map[reverse_seg_map == 1] = -1 - reverse_seg_map[reverse_seg_map == 0] = 1 - reverse_seg_map[reverse_seg_map == -1] = 0 - background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance)) - foreground_loss = mse_criterion(relevance * seg_map, seg_map) - segmentation_loss = args.lambda_background * background_loss - segmentation_loss += args.lambda_foreground * foreground_loss - - # classification loss - output = model(image_ten) - with torch.no_grad(): - output_orig = orig_model(image_ten) - - _, pred = output.topk(1, 1, True, True) - pred = pred.flatten() - if args.temperature != 1: - output = output / args.temperature - classification_loss = criterion(output, pred) - - loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss - - # debugging output - if i % args.save_interval == 0: - orig_relevance = generate_relevance(orig_model, image_ten, index=class_name) - for j in range(image_ten.shape[0]): - image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j])) - new_vis = get_image_with_relevance(image_ten[j], relevance[j]) - old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j]) - gt = get_image_with_relevance(image_ten[j], seg_map[j]) - h_img = cv2.hconcat([image, gt, old_vis, new_vis]) - cv2.imwrite(f'{args.experiment_folder}/train_samples/res_{i}_{j}.jpg', h_img) - - # measure accuracy and record loss - acc1, acc5 = accuracy(output, class_name, topk=(1, 5)) - losses.update(loss.item(), image_ten.size(0)) - top1.update(acc1[0], image_ten.size(0)) - top5.update(acc5[0], image_ten.size(0)) - - # metrics for original vit - acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5)) - orig_top1.update(acc1_orig[0], image_ten.size(0)) - orig_top5.update(acc5_orig[0], image_ten.size(0)) - - # compute gradient and do SGD step - optimizer.zero_grad() - loss.backward() - optimizer.step() - - if i % args.print_freq == 0: - progress.display(i) - args.logger.add_scalar('{}/{}'.format('train', 'segmentation_loss'), segmentation_loss, - epoch*len(train_loader)+i) - args.logger.add_scalar('{}/{}'.format('train', 'classification_loss'), classification_loss, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'orig_top1'), acc1_orig, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'top1'), acc1, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'orig_top5'), acc5_orig, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'top5'), acc5, - epoch * len(train_loader) + i) - args.logger.add_scalar('{}/{}'.format('train', 'tot_loss'), loss, - epoch * len(train_loader) + i) - - -def validate(val_loader, model, criterion, epoch, args): - mse_criterion = torch.nn.MSELoss(reduction='mean') - losses = AverageMeter('Loss', ':.4e') - top1 = AverageMeter('Acc@1', ':6.2f') - top5 = AverageMeter('Acc@5', ':6.2f') - orig_top1 = AverageMeter('Acc@1_orig', ':6.2f') - orig_top5 = AverageMeter('Acc@5_orig', ':6.2f') - progress = ProgressMeter( - len(val_loader), - [losses, top1, top5, orig_top1, orig_top5], - prefix="Epoch: [{}]".format(val_loader)) - - # switch to evaluate mode - model.eval() - - orig_model = vit(pretrained=True).cuda() - orig_model.eval() - - with torch.no_grad(): - end = time.time() - for i, (seg_map, image_ten, class_name) in enumerate(val_loader): - if args.gpu is not None: - image_ten = image_ten.cuda(args.gpu, non_blocking=True) - if torch.cuda.is_available(): - seg_map = seg_map.cuda(args.gpu, non_blocking=True) - class_name = class_name.cuda(args.gpu, non_blocking=True) - - # segmentation loss - with torch.enable_grad(): - relevance = generate_relevance(model, image_ten, index=class_name) - - reverse_seg_map = seg_map.clone() - reverse_seg_map[reverse_seg_map == 1] = -1 - reverse_seg_map[reverse_seg_map == 0] = 1 - reverse_seg_map[reverse_seg_map == -1] = 0 - background_loss = mse_criterion(relevance * reverse_seg_map, torch.zeros_like(relevance)) - foreground_loss = mse_criterion(relevance * seg_map, seg_map) - segmentation_loss = args.lambda_background * background_loss - segmentation_loss += args.lambda_foreground * foreground_loss - - # classification loss - with torch.no_grad(): - output = model(image_ten) - output_orig = orig_model(image_ten) - - _, pred = output.topk(1, 1, True, True) - pred = pred.flatten() - if args.temperature != 1: - output = output / args.temperature - classification_loss = criterion(output, pred) - loss = args.lambda_seg * segmentation_loss + args.lambda_acc * classification_loss - - # save results - if i % args.save_interval == 0: - with torch.enable_grad(): - orig_relevance = generate_relevance(orig_model, image_ten, index=class_name) - for j in range(image_ten.shape[0]): - image = get_image_with_relevance(image_ten[j], torch.ones_like(image_ten[j])) - new_vis = get_image_with_relevance(image_ten[j], relevance[j]) - old_vis = get_image_with_relevance(image_ten[j], orig_relevance[j]) - gt = get_image_with_relevance(image_ten[j], seg_map[j]) - h_img = cv2.hconcat([image, gt, old_vis, new_vis]) - cv2.imwrite(f'{args.experiment_folder}/val_samples/res_{i}_{j}.jpg', h_img) - - # measure accuracy and record loss - acc1, acc5 = accuracy(output, class_name, topk=(1, 5)) - losses.update(loss.item(), image_ten.size(0)) - top1.update(acc1[0], image_ten.size(0)) - top5.update(acc5[0], image_ten.size(0)) - - # metrics for original vit - acc1_orig, acc5_orig = accuracy(output_orig, class_name, topk=(1, 5)) - orig_top1.update(acc1_orig[0], image_ten.size(0)) - orig_top5.update(acc5_orig[0], image_ten.size(0)) - - if i % args.print_freq == 0: - progress.display(i) - args.logger.add_scalar('{}/{}'.format('val', 'segmentation_loss'), segmentation_loss, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'classification_loss'), classification_loss, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'orig_top1'), acc1_orig, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'top1'), acc1, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'orig_top5'), acc5_orig, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'top5'), acc5, - epoch * len(val_loader) + i) - args.logger.add_scalar('{}/{}'.format('val', 'tot_loss'), loss, - epoch * len(val_loader) + i) - - # TODO: this should also be done with the ProgressMeter - print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}' - .format(top1=top1, top5=top5)) - - return losses.avg - - -def save_checkpoint(state, is_best, folder, filename='checkpoint.pth.tar'): - torch.save(state, f'{folder}/{filename}') - if is_best: - shutil.copyfile(f'{folder}/{filename}', f'{folder}/model_best.pth.tar') - - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self, name, fmt=':f'): - self.name = name - self.fmt = fmt - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})' - return fmtstr.format(**self.__dict__) - - -class ProgressMeter(object): - def __init__(self, num_batches, meters, prefix=""): - self.batch_fmtstr = self._get_batch_fmtstr(num_batches) - self.meters = meters - self.prefix = prefix - - def display(self, batch): - entries = [self.prefix + self.batch_fmtstr.format(batch)] - entries += [str(meter) for meter in self.meters] - print('\t'.join(entries)) - - def _get_batch_fmtstr(self, num_batches): - num_digits = len(str(num_batches // 1)) - fmt = '{:' + str(num_digits) + 'd}' - return '[' + fmt + '/' + fmt.format(num_batches) + ']' - -def adjust_learning_rate(optimizer, epoch, args): - """Sets the learning rate to the initial LR decayed by 10 every 30 epochs""" - lr = args.lr * (0.85 ** (epoch // 2)) - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - -def accuracy(output, target, topk=(1,)): - """Computes the accuracy over the k top predictions for the specified values of k""" - with torch.no_grad(): - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/ldm/models/diffusion/ddpm.py b/spaces/Hoodady/3DFuse/ldm/models/diffusion/ddpm.py deleted file mode 100644 index f52edbb91720ecf276238761754064c5a43a4ed0..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1796 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager, nullcontext -from functools import partial -import itertools -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -from omegaconf import ListConfig - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - make_it_fit=False, - ucg_training=None, - reset_ema=False, - reset_num_ema_updates=False, - ): - super().__init__() - assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - self.make_it_fit = make_it_fit - if reset_ema: assert exists(ckpt_path) - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - if reset_ema: - assert self.use_ema - print(f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - else: - self.register_buffer('logvar', logvar) - - self.ucg_training = ucg_training or dict() - if self.ucg_training: - self.ucg_prng = np.random.RandomState() - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - elif self.parameterization == "v": - lvlb_weights = torch.ones_like(self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))) - else: - raise NotImplementedError("mu not supported") - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - @torch.no_grad() - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - if self.make_it_fit: - n_params = len([name for name, _ in - itertools.chain(self.named_parameters(), - self.named_buffers())]) - for name, param in tqdm( - itertools.chain(self.named_parameters(), - self.named_buffers()), - desc="Fitting old weights to new weights", - total=n_params - ): - if not name in sd: - continue - old_shape = sd[name].shape - new_shape = param.shape - assert len(old_shape) == len(new_shape) - if len(new_shape) > 2: - # we only modify first two axes - assert new_shape[2:] == old_shape[2:] - # assumes first axis corresponds to output dim - if not new_shape == old_shape: - new_param = param.clone() - old_param = sd[name] - if len(new_shape) == 1: - for i in range(new_param.shape[0]): - new_param[i] = old_param[i % old_shape[0]] - elif len(new_shape) >= 2: - for i in range(new_param.shape[0]): - for j in range(new_param.shape[1]): - new_param[i, j] = old_param[i % old_shape[0], j % old_shape[1]] - - n_used_old = torch.ones(old_shape[1]) - for j in range(new_param.shape[1]): - n_used_old[j % old_shape[1]] += 1 - n_used_new = torch.zeros(new_shape[1]) - for j in range(new_param.shape[1]): - n_used_new[j] = n_used_old[j % old_shape[1]] - - n_used_new = n_used_new[None, :] - while len(n_used_new.shape) < len(new_shape): - n_used_new = n_used_new.unsqueeze(-1) - new_param /= n_used_new - - sd[name] = new_param - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys:\n {missing}") - if len(unexpected) > 0: - print(f"\nUnexpected Keys:\n {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def predict_start_from_z_and_v(self, x_t, t, v): - # self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - # self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v - ) - - def predict_eps_from_z_and_v(self, x_t, t, v): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x_t.shape) * v + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * x_t - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_v(self, x, noise, t): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x - ) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - for k in self.ucg_training: - p = self.ucg_training[k]["p"] - val = self.ucg_training[k]["val"] - if val is None: - val = "" - for i in range(len(batch[k])): - if self.ucg_prng.choice(2, p=[1 - p, p]): - batch[k][i] = val - - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - force_null_conditioning=False, - *args, **kwargs): - self.force_null_conditioning = force_null_conditioning - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__' and not self.force_null_conditioning: - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - reset_ema = kwargs.pop("reset_ema", False) - reset_num_ema_updates = kwargs.pop("reset_num_ema_updates", False) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - if reset_ema: - assert self.use_ema - print( - f"Resetting ema to pure model weights. This is useful when restoring from an ema-only checkpoint.") - self.model_ema = LitEma(self.model) - if reset_num_ema_updates: - print(" +++++++++++ WARNING: RESETTING NUM_EMA UPDATES TO ZERO +++++++++++ ") - assert self.use_ema - self.model_ema.reset_num_updates() - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, return_x=False): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None and not self.force_null_conditioning: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox', "txt"]: - xc = batch[cond_key] - elif cond_key in ['class_label', 'cls']: - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_x: - out.extend([x]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def apply_model(self, x_noisy, t, cond, return_ids=False): - if isinstance(cond, dict): - # hybrid case, cond is expected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, **kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self, cond, batch_size, ddim, ddim_steps, **kwargs): - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates = ddim_sampler.sample(ddim_steps, batch_size, - shape, cond, verbose=False, **kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True, **kwargs) - - return samples, intermediates - - @torch.no_grad() - def get_unconditional_conditioning(self, batch_size, null_label=None): - if null_label is not None: - xc = null_label - if isinstance(xc, ListConfig): - xc = list(xc) - if isinstance(xc, dict) or isinstance(xc, list): - c = self.get_learned_conditioning(xc) - else: - if hasattr(xc, "to"): - xc = xc.to(self.device) - c = self.get_learned_conditioning(xc) - else: - if self.cond_stage_key in ["class_label", "cls"]: - xc = self.cond_stage_model.get_unconditional_conditioning(batch_size, device=self.device) - return self.get_learned_conditioning(xc) - else: - raise NotImplementedError("todo") - if isinstance(c, list): # in case the encoder gives us a list - for i in range(len(c)): - c[i] = repeat(c[i], '1 ... -> b ...', b=batch_size).to(self.device) - else: - c = repeat(c, '1 ... -> b ...', b=batch_size).to(self.device) - return c - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=0., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', "cls"]: - try: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - except KeyError: - # probably no "human_label" in batch - pass - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if unconditional_guidance_scale > 1.0: - uc = self.get_unconditional_conditioning(N, unconditional_guidance_label) - if self.model.conditioning_key == "crossattn-adm": - uc = {"c_crossattn": [uc], "c_adm": c["c_adm"]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] - with ema_scope("Plotting Inpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - mask = 1. - mask - with ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. - return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.sequential_cross_attn = diff_model_config.pop("sequential_crossattn", False) - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm', 'hybrid-adm', 'crossattn-adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, c_adm=None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - if not self.sequential_cross_attn: - cc = torch.cat(c_crossattn, 1) - else: - cc = c_crossattn - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'hybrid-adm': - assert c_adm is not None - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc, y=c_adm) - elif self.conditioning_key == 'crossattn-adm': - assert c_adm is not None - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc, y=c_adm) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class LatentUpscaleDiffusion(LatentDiffusion): - def __init__(self, *args, low_scale_config, low_scale_key="LR", noise_level_key=None, **kwargs): - super().__init__(*args, **kwargs) - # assumes that neither the cond_stage nor the low_scale_model contain trainable params - assert not self.cond_stage_trainable - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - self.noise_level_key = noise_level_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, log_mode=False): - if not log_mode: - z, c = super().get_input(batch, k, force_c_encode=True, bs=bs) - else: - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - x_low = batch[self.low_scale_key][:bs] - x_low = rearrange(x_low, 'b h w c -> b c h w') - x_low = x_low.to(memory_format=torch.contiguous_format).float() - zx, noise_level = self.low_scale_model(x_low) - if self.noise_level_key is not None: - # get noise level from batch instead, e.g. when extracting a custom noise level for bsr - raise NotImplementedError('TODO') - - all_conds = {"c_concat": [zx], "c_crossattn": [c], "c_adm": noise_level} - if log_mode: - # TODO: maybe disable if too expensive - x_low_rec = self.low_scale_model.decode(zx) - return z, all_conds, x, xrec, xc, x_low, x_low_rec, noise_level - return z, all_conds - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - plot_denoise_rows=False, plot_progressive_rows=True, plot_diffusion_rows=True, - unconditional_guidance_scale=1., unconditional_guidance_label=None, use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc, x_low, x_low_rec, noise_level = self.get_input(batch, self.first_stage_key, bs=N, - log_mode=True) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - log["x_lr"] = x_low - log[f"x_lr_rec_@noise_levels{'-'.join(map(lambda x: str(x), list(noise_level.cpu().numpy())))}"] = x_low_rec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_tmp = self.get_unconditional_conditioning(N, unconditional_guidance_label) - # TODO explore better "unconditional" choices for the other keys - # maybe guide away from empty text label and highest noise level and maximally degraded zx? - uc = dict() - for k in c: - if k == "c_crossattn": - assert isinstance(c[k], list) and len(c[k]) == 1 - uc[k] = [uc_tmp] - elif k == "c_adm": # todo: only run with text-based guidance? - assert isinstance(c[k], torch.Tensor) - #uc[k] = torch.ones_like(c[k]) * self.low_scale_model.max_noise_level - uc[k] = c[k] - elif isinstance(c[k], list): - uc[k] = [c[k][i] for i in range(len(c[k]))] - else: - uc[k] = c[k] - - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - if plot_progressive_rows: - with ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - return log - - -class LatentFinetuneDiffusion(LatentDiffusion): - """ - Basis for different finetunas, such as inpainting or depth2image - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys: tuple, - finetune_keys=("model.diffusion_model.input_blocks.0.0.weight", - "model_ema.diffusion_modelinput_blocks00weight" - ), - keep_finetune_dims=4, - # if model was trained without concat mode before and we would like to keep these channels - c_concat_log_start=None, # to log reconstruction of c_concat codes - c_concat_log_end=None, - *args, **kwargs - ): - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", list()) - super().__init__(*args, **kwargs) - self.finetune_keys = finetune_keys - self.concat_keys = concat_keys - self.keep_dims = keep_finetune_dims - self.c_concat_log_start = c_concat_log_start - self.c_concat_log_end = c_concat_log_end - if exists(self.finetune_keys): assert exists(ckpt_path), 'can only finetune from a given checkpoint' - if exists(ckpt_path): - self.init_from_ckpt(ckpt_path, ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - - # make it explicit, finetune by including extra input channels - if exists(self.finetune_keys) and k in self.finetune_keys: - new_entry = None - for name, param in self.named_parameters(): - if name in self.finetune_keys: - print( - f"modifying key '{name}' and keeping its original {self.keep_dims} (channels) dimensions only") - new_entry = torch.zeros_like(param) # zero init - assert exists(new_entry), 'did not find matching parameter to modify' - new_entry[:, :self.keep_dims, ...] = sd[k] - sd[k] = new_entry - - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, unconditional_guidance_scale=1., unconditional_guidance_label=None, - use_ema_scope=True, - **kwargs): - ema_scope = self.ema_scope if use_ema_scope else nullcontext - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, bs=N, return_first_stage_outputs=True) - c_cat, c = c["c_concat"][0], c["c_crossattn"][0] - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption", "txt"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch[self.cond_stage_key], size=x.shape[2] // 25) - log["conditioning"] = xc - elif self.cond_stage_key in ['class_label', 'cls']: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"], size=x.shape[2] // 25) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if not (self.c_concat_log_start is None and self.c_concat_log_end is None): - log["c_concat_decoded"] = self.decode_first_stage(c_cat[:, self.c_concat_log_start:self.c_concat_log_end]) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with ema_scope("Sampling"): - samples, z_denoise_row = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if unconditional_guidance_scale > 1.0: - uc_cross = self.get_unconditional_conditioning(N, unconditional_guidance_label) - uc_cat = c_cat - uc_full = {"c_concat": [uc_cat], "c_crossattn": [uc_cross]} - with ema_scope("Sampling with classifier-free guidance"): - samples_cfg, _ = self.sample_log(cond={"c_concat": [c_cat], "c_crossattn": [c]}, - batch_size=N, ddim=use_ddim, - ddim_steps=ddim_steps, eta=ddim_eta, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=uc_full, - ) - x_samples_cfg = self.decode_first_stage(samples_cfg) - log[f"samples_cfg_scale_{unconditional_guidance_scale:.2f}"] = x_samples_cfg - - return log - - -class LatentInpaintDiffusion(LatentFinetuneDiffusion): - """ - can either run as pure inpainting model (only concat mode) or with mixed conditionings, - e.g. mask as concat and text via cross-attn. - To disable finetuning mode, set finetune_keys to None - """ - - def __init__(self, - concat_keys=("mask", "masked_image"), - masked_image_key="masked_image", - *args, **kwargs - ): - super().__init__(concat_keys, *args, **kwargs) - self.masked_image_key = masked_image_key - assert self.masked_image_key in concat_keys - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for inpainting' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - c_cat = list() - for ck in self.concat_keys: - cc = rearrange(batch[ck], 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - bchw = z.shape - if ck != self.masked_image_key: - cc = torch.nn.functional.interpolate(cc, size=bchw[-2:]) - else: - cc = self.get_first_stage_encoding(self.encode_first_stage(cc)) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super(LatentInpaintDiffusion, self).log_images(*args, **kwargs) - log["masked_image"] = rearrange(args[0]["masked_image"], - 'b h w c -> b c h w').to(memory_format=torch.contiguous_format).float() - return log - - -class LatentDepth2ImageDiffusion(LatentFinetuneDiffusion): - """ - condition on monocular depth estimation - """ - - def __init__(self, depth_stage_config, concat_keys=("midas_in",), *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.depth_model = instantiate_from_config(depth_stage_config) - self.depth_stage_key = concat_keys[0] - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for depth2img' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - c_cat = list() - for ck in self.concat_keys: - cc = batch[ck] - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - cc = self.depth_model(cc) - cc = torch.nn.functional.interpolate( - cc, - size=z.shape[2:], - mode="bicubic", - align_corners=False, - ) - - depth_min, depth_max = torch.amin(cc, dim=[1, 2, 3], keepdim=True), torch.amax(cc, dim=[1, 2, 3], - keepdim=True) - cc = 2. * (cc - depth_min) / (depth_max - depth_min + 0.001) - 1. - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - depth = self.depth_model(args[0][self.depth_stage_key]) - depth_min, depth_max = torch.amin(depth, dim=[1, 2, 3], keepdim=True), \ - torch.amax(depth, dim=[1, 2, 3], keepdim=True) - log["depth"] = 2. * (depth - depth_min) / (depth_max - depth_min) - 1. - return log - - -class LatentUpscaleFinetuneDiffusion(LatentFinetuneDiffusion): - """ - condition on low-res image (and optionally on some spatial noise augmentation) - """ - def __init__(self, concat_keys=("lr",), reshuffle_patch_size=None, - low_scale_config=None, low_scale_key=None, *args, **kwargs): - super().__init__(concat_keys=concat_keys, *args, **kwargs) - self.reshuffle_patch_size = reshuffle_patch_size - self.low_scale_model = None - if low_scale_config is not None: - print("Initializing a low-scale model") - assert exists(low_scale_key) - self.instantiate_low_stage(low_scale_config) - self.low_scale_key = low_scale_key - - def instantiate_low_stage(self, config): - model = instantiate_from_config(config) - self.low_scale_model = model.eval() - self.low_scale_model.train = disabled_train - for param in self.low_scale_model.parameters(): - param.requires_grad = False - - @torch.no_grad() - def get_input(self, batch, k, cond_key=None, bs=None, return_first_stage_outputs=False): - # note: restricted to non-trainable encoders currently - assert not self.cond_stage_trainable, 'trainable cond stages not yet supported for upscaling-ft' - z, c, x, xrec, xc = super().get_input(batch, self.first_stage_key, return_first_stage_outputs=True, - force_c_encode=True, return_original_cond=True, bs=bs) - - assert exists(self.concat_keys) - assert len(self.concat_keys) == 1 - # optionally make spatial noise_level here - c_cat = list() - noise_level = None - for ck in self.concat_keys: - cc = batch[ck] - cc = rearrange(cc, 'b h w c -> b c h w') - if exists(self.reshuffle_patch_size): - assert isinstance(self.reshuffle_patch_size, int) - cc = rearrange(cc, 'b c (p1 h) (p2 w) -> b (p1 p2 c) h w', - p1=self.reshuffle_patch_size, p2=self.reshuffle_patch_size) - if bs is not None: - cc = cc[:bs] - cc = cc.to(self.device) - if exists(self.low_scale_model) and ck == self.low_scale_key: - cc, noise_level = self.low_scale_model(cc) - c_cat.append(cc) - c_cat = torch.cat(c_cat, dim=1) - if exists(noise_level): - all_conds = {"c_concat": [c_cat], "c_crossattn": [c], "c_adm": noise_level} - else: - all_conds = {"c_concat": [c_cat], "c_crossattn": [c]} - if return_first_stage_outputs: - return z, all_conds, x, xrec, xc - return z, all_conds - - @torch.no_grad() - def log_images(self, *args, **kwargs): - log = super().log_images(*args, **kwargs) - log["lr"] = rearrange(args[0]["lr"], 'b h w c -> b c h w') - return log diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/permuter.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/permuter.py deleted file mode 100644 index 0d43bb135adde38d94bf18a7e5edaa4523cd95cf..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/transformer/permuter.py +++ /dev/null @@ -1,248 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np - - -class AbstractPermuter(nn.Module): - def __init__(self, *args, **kwargs): - super().__init__() - def forward(self, x, reverse=False): - raise NotImplementedError - - -class Identity(AbstractPermuter): - def __init__(self): - super().__init__() - - def forward(self, x, reverse=False): - return x - - -class Subsample(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - C = 1 - indices = np.arange(H*W).reshape(C,H,W) - while min(H, W) > 1: - indices = indices.reshape(C,H//2,2,W//2,2) - indices = indices.transpose(0,2,4,1,3) - indices = indices.reshape(C*4,H//2, W//2) - H = H//2 - W = W//2 - C = C*4 - assert H == W == 1 - idx = torch.tensor(indices.ravel()) - self.register_buffer('forward_shuffle_idx', - nn.Parameter(idx, requires_grad=False)) - self.register_buffer('backward_shuffle_idx', - nn.Parameter(torch.argsort(idx), requires_grad=False)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -def mortonify(i, j): - """(i,j) index to linear morton code""" - i = np.uint64(i) - j = np.uint64(j) - - z = np.uint(0) - - for pos in range(32): - z = (z | - ((j & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos)) | - ((i & (np.uint64(1) << np.uint64(pos))) << np.uint64(pos+1)) - ) - return z - - -class ZCurve(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - reverseidx = [np.int64(mortonify(i,j)) for i in range(H) for j in range(W)] - idx = np.argsort(reverseidx) - idx = torch.tensor(idx) - reverseidx = torch.tensor(reverseidx) - self.register_buffer('forward_shuffle_idx', - idx) - self.register_buffer('backward_shuffle_idx', - reverseidx) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class SpiralOut(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - assert H == W - size = W - indices = np.arange(size*size).reshape(size,size) - - i0 = size//2 - j0 = size//2-1 - - i = i0 - j = j0 - - idx = [indices[i0, j0]] - step_mult = 0 - for c in range(1, size//2+1): - step_mult += 1 - # steps left - for k in range(step_mult): - i = i - 1 - j = j - idx.append(indices[i, j]) - - # step down - for k in range(step_mult): - i = i - j = j + 1 - idx.append(indices[i, j]) - - step_mult += 1 - if c < size//2: - # step right - for k in range(step_mult): - i = i + 1 - j = j - idx.append(indices[i, j]) - - # step up - for k in range(step_mult): - i = i - j = j - 1 - idx.append(indices[i, j]) - else: - # end reached - for k in range(step_mult-1): - i = i + 1 - idx.append(indices[i, j]) - - assert len(idx) == size*size - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class SpiralIn(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - assert H == W - size = W - indices = np.arange(size*size).reshape(size,size) - - i0 = size//2 - j0 = size//2-1 - - i = i0 - j = j0 - - idx = [indices[i0, j0]] - step_mult = 0 - for c in range(1, size//2+1): - step_mult += 1 - # steps left - for k in range(step_mult): - i = i - 1 - j = j - idx.append(indices[i, j]) - - # step down - for k in range(step_mult): - i = i - j = j + 1 - idx.append(indices[i, j]) - - step_mult += 1 - if c < size//2: - # step right - for k in range(step_mult): - i = i + 1 - j = j - idx.append(indices[i, j]) - - # step up - for k in range(step_mult): - i = i - j = j - 1 - idx.append(indices[i, j]) - else: - # end reached - for k in range(step_mult-1): - i = i + 1 - idx.append(indices[i, j]) - - assert len(idx) == size*size - idx = idx[::-1] - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class Random(nn.Module): - def __init__(self, H, W): - super().__init__() - indices = np.random.RandomState(1).permutation(H*W) - idx = torch.tensor(indices.ravel()) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -class AlternateParsing(AbstractPermuter): - def __init__(self, H, W): - super().__init__() - indices = np.arange(W*H).reshape(H,W) - for i in range(1, H, 2): - indices[i, :] = indices[i, ::-1] - idx = indices.flatten() - assert len(idx) == H*W - idx = torch.tensor(idx) - self.register_buffer('forward_shuffle_idx', idx) - self.register_buffer('backward_shuffle_idx', torch.argsort(idx)) - - def forward(self, x, reverse=False): - if not reverse: - return x[:, self.forward_shuffle_idx] - else: - return x[:, self.backward_shuffle_idx] - - -if __name__ == "__main__": - p0 = AlternateParsing(16, 16) - print(p0.forward_shuffle_idx) - print(p0.backward_shuffle_idx) - - x = torch.randint(0, 768, size=(11, 256)) - y = p0(x) - xre = p0(y, reverse=True) - assert torch.equal(x, xre) - - p1 = SpiralOut(2, 2) - print(p1.forward_shuffle_idx) - print(p1.backward_shuffle_idx) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py deleted file mode 100644 index 38c7ac492f390a367a64769d7a72fe228df097c7..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py +++ /dev/null @@ -1,431 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import gc -import os.path as osp -import warnings -from collections import deque, namedtuple -from typing import Any, Dict, Tuple - -import numpy as np -import torch -from fairseq import tasks -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models.fairseq_model import FairseqModel -from fairseq.utils import apply_to_sample -from omegaconf import open_dict, OmegaConf - -from typing import List - -from .decoder_config import FlashlightDecoderConfig -from .base_decoder import BaseDecoder - -try: - from flashlight.lib.text.decoder import ( - LM, - CriterionType, - DecodeResult, - KenLM, - LexiconDecoder, - LexiconDecoderOptions, - LexiconFreeDecoder, - LexiconFreeDecoderOptions, - LMState, - SmearingMode, - Trie, - ) - from flashlight.lib.text.dictionary import create_word_dict, load_words -except ImportError: - warnings.warn( - "flashlight python bindings are required to use this functionality. " - "Please install from " - "https://github.com/facebookresearch/flashlight/tree/master/bindings/python" - ) - LM = object - LMState = object - - -class KenLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - if cfg.lexicon: - self.lexicon = load_words(cfg.lexicon) - self.word_dict = create_word_dict(self.lexicon) - self.unk_word = self.word_dict.get_index("") - - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.trie = Trie(self.vocab_size, self.silence) - - start_state = self.lm.start(False) - for word, spellings in self.lexicon.items(): - word_idx = self.word_dict.get_index(word) - _, score = self.lm.score(start_state, word_idx) - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{word} {spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def get_timesteps(self, token_idxs: List[int]) -> List[int]: - """Returns frame numbers corresponding to every non-blank token. - - Parameters - ---------- - token_idxs : List[int] - IDs of decoded tokens. - - Returns - ------- - List[int] - Frame numbers corresponding to every non-blank token. - """ - timesteps = [] - for i, token_idx in enumerate(token_idxs): - if token_idx == self.blank: - continue - if i == 0 or token_idx != token_idxs[i-1]: - timesteps.append(i) - return timesteps - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append( - [ - { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - "timesteps": self.get_timesteps(result.tokens), - "words": [ - self.word_dict.get_entry(x) for x in result.words if x >= 0 - ], - } - for result in nbest_results - ] - ) - return hypos - - -FairseqLMState = namedtuple( - "FairseqLMState", - [ - "prefix", - "incremental_state", - "probs", - ], -) - - -class FairseqLM(LM): - def __init__(self, dictionary: Dictionary, model: FairseqModel) -> None: - super().__init__() - - self.dictionary = dictionary - self.model = model - self.unk = self.dictionary.unk() - - self.save_incremental = False # this currently does not work properly - self.max_cache = 20_000 - - if torch.cuda.is_available(): - model.cuda() - model.eval() - model.make_generation_fast_() - - self.states = {} - self.stateq = deque() - - def start(self, start_with_nothing: bool) -> LMState: - state = LMState() - prefix = torch.LongTensor([[self.dictionary.eos()]]) - incremental_state = {} if self.save_incremental else None - with torch.no_grad(): - res = self.model(prefix.cuda(), incremental_state=incremental_state) - probs = self.model.get_normalized_probs(res, log_probs=True, sample=None) - - if incremental_state is not None: - incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state) - self.states[state] = FairseqLMState( - prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy() - ) - self.stateq.append(state) - - return state - - def score( - self, - state: LMState, - token_index: int, - no_cache: bool = False, - ) -> Tuple[LMState, int]: - """ - Evaluate language model based on the current lm state and new word - Parameters: - ----------- - state: current lm state - token_index: index of the word - (can be lexicon index then you should store inside LM the - mapping between indices of lexicon and lm, or lm index of a word) - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - curr_state = self.states[state] - - def trim_cache(targ_size: int) -> None: - while len(self.stateq) > targ_size: - rem_k = self.stateq.popleft() - rem_st = self.states[rem_k] - rem_st = FairseqLMState(rem_st.prefix, None, None) - self.states[rem_k] = rem_st - - if curr_state.probs is None: - new_incremental_state = ( - curr_state.incremental_state.copy() - if curr_state.incremental_state is not None - else None - ) - with torch.no_grad(): - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cuda(), new_incremental_state - ) - elif self.save_incremental: - new_incremental_state = {} - - res = self.model( - torch.from_numpy(curr_state.prefix).cuda(), - incremental_state=new_incremental_state, - ) - probs = self.model.get_normalized_probs( - res, log_probs=True, sample=None - ) - - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cpu(), new_incremental_state - ) - - curr_state = FairseqLMState( - curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy() - ) - - if not no_cache: - self.states[state] = curr_state - self.stateq.append(state) - - score = curr_state.probs[token_index].item() - - trim_cache(self.max_cache) - - outstate = state.child(token_index) - if outstate not in self.states and not no_cache: - prefix = np.concatenate( - [curr_state.prefix, torch.LongTensor([[token_index]])], -1 - ) - incr_state = curr_state.incremental_state - - self.states[outstate] = FairseqLMState(prefix, incr_state, None) - - if token_index == self.unk: - score = float("-inf") - - return outstate, score - - def finish(self, state: LMState) -> Tuple[LMState, int]: - """ - Evaluate eos for language model based on the current lm state - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - return self.score(state, self.dictionary.eos()) - - def empty_cache(self) -> None: - self.states = {} - self.stateq = deque() - gc.collect() - - -class FairseqLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - self.lexicon = load_words(cfg.lexicon) if cfg.lexicon else None - self.idx_to_wrd = {} - - checkpoint = torch.load(cfg.lmpath, map_location="cpu") - - if "cfg" in checkpoint and checkpoint["cfg"] is not None: - lm_args = checkpoint["cfg"] - else: - lm_args = convert_namespace_to_omegaconf(checkpoint["args"]) - - if not OmegaConf.is_dict(lm_args): - lm_args = OmegaConf.create(lm_args) - - with open_dict(lm_args.task): - lm_args.task.data = osp.dirname(cfg.lmpath) - - task = tasks.setup_task(lm_args.task) - model = task.build_model(lm_args.model) - model.load_state_dict(checkpoint["model"], strict=False) - - self.trie = Trie(self.vocab_size, self.silence) - - self.word_dict = task.dictionary - self.unk_word = self.word_dict.unk() - self.lm = FairseqLM(self.word_dict, model) - - if self.lexicon: - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - if self.unitlm: - word_idx = i - self.idx_to_wrd[i] = word - score = 0 - else: - word_idx = self.word_dict.index(word) - _, score = self.lm.score(start_state, word_idx, no_cache=True) - - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - - def make_hypo(result: DecodeResult) -> Dict[str, Any]: - hypo = { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - } - if self.lexicon: - hypo["words"] = [ - self.idx_to_wrd[x] if self.unitlm else self.word_dict[x] - for x in result.words - if x >= 0 - ] - return hypo - - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append([make_hypo(result) for result in nbest_results]) - self.lm.empty_cache() - - return hypos diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/wav2vec/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/wav2vec/__init__.py deleted file mode 100644 index 06cec18183ca14cd534d14558e8b44e25f3e69d5..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/wav2vec/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .wav2vec import * # noqa -from .wav2vec2 import * # noqa -from .wav2vec2_asr import * # noqa diff --git a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/README.md b/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/README.md deleted file mode 100644 index 8f206cd9830e0a7c88e8a303390eebaa7aaae1df..0000000000000000000000000000000000000000 --- a/spaces/Ibtehaj10/cheating-detection-FYP/yolovs5/utils/loggers/comet/README.md +++ /dev/null @@ -1,256 +0,0 @@ - - -# YOLOv5 with Comet - -This guide will cover how to use YOLOv5 with [Comet](https://bit.ly/yolov5-readme-comet) - -# About Comet - -Comet builds tools that help data scientists, engineers, and team leaders accelerate and optimize machine learning and deep learning models. - -Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with [Comet Custom Panels](https://bit.ly/yolov5-colab-comet-panels)! -Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes! - -# Getting Started - -## Install Comet - -```shell -pip install comet_ml -``` - -## Configure Comet Credentials - -There are two ways to configure Comet with YOLOv5. - -You can either set your credentials through enviroment variables - -**Environment Variables** - -```shell -export COMET_API_KEY= -export COMET_PROJECT_NAME= # This will default to 'yolov5' -``` - -Or create a `.comet.config` file in your working directory and set your credentials there. - -**Comet Configuration File** - -``` -[comet] -api_key= -project_name= # This will default to 'yolov5' -``` - -## Run the Training Script - -```shell -# Train YOLOv5s on COCO128 for 5 epochs -python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt -``` - -That's it! Comet will automatically log your hyperparameters, command line arguments, training and valiation metrics. You can visualize and analyze your runs in the Comet UI - -yolo-ui - -# Try out an Example! -Check out an example of a [completed run here](https://www.comet.com/examples/comet-example-yolov5/a0e29e0e9b984e4a822db2a62d0cb357?experiment-tab=chart&showOutliers=true&smoothing=0&transformY=smoothing&xAxis=step&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - -Or better yet, try it out yourself in this Colab Notebook - -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1RG0WOQyxlDlo5Km8GogJpIEJlg_5lyYO?usp=sharing) - -# Log automatically - -By default, Comet will log the following items - -## Metrics -- Box Loss, Object Loss, Classification Loss for the training and validation data -- mAP_0.5, mAP_0.5:0.95 metrics for the validation data. -- Precision and Recall for the validation data - -## Parameters - -- Model Hyperparameters -- All parameters passed through the command line options - -## Visualizations - -- Confusion Matrix of the model predictions on the validation data -- Plots for the PR and F1 curves across all classes -- Correlogram of the Class Labels - -# Configure Comet Logging - -Comet can be configured to log additional data either through command line flags passed to the training script -or through environment variables. - -```shell -export COMET_MODE=online # Set whether to run Comet in 'online' or 'offline' mode. Defaults to online -export COMET_MODEL_NAME= #Set the name for the saved model. Defaults to yolov5 -export COMET_LOG_CONFUSION_MATRIX=false # Set to disable logging a Comet Confusion Matrix. Defaults to true -export COMET_MAX_IMAGE_UPLOADS= # Controls how many total image predictions to log to Comet. Defaults to 100. -export COMET_LOG_PER_CLASS_METRICS=true # Set to log evaluation metrics for each detected class at the end of training. Defaults to false -export COMET_DEFAULT_CHECKPOINT_FILENAME= # Set this if you would like to resume training from a different checkpoint. Defaults to 'last.pt' -export COMET_LOG_BATCH_LEVEL_METRICS=true # Set this if you would like to log training metrics at the batch level. Defaults to false. -export COMET_LOG_PREDICTIONS=true # Set this to false to disable logging model predictions -``` - -## Logging Checkpoints with Comet - -Logging Models to Comet is disabled by default. To enable it, pass the `save-period` argument to the training script. This will save the -logged checkpoints to Comet based on the interval value provided by `save-period` - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---save-period 1 -``` - -## Logging Model Predictions - -By default, model predictions (images, ground truth labels and bounding boxes) will be logged to Comet. - -You can control the frequency of logged predictions and the associated images by passing the `bbox_interval` command line argument. Predictions can be visualized using Comet's Object Detection Custom Panel. This frequency corresponds to every Nth batch of data per epoch. In the example below, we are logging every 2nd batch of data for each epoch. - -**Note:** The YOLOv5 validation dataloader will default to a batch size of 32, so you will have to set the logging frequency accordingly. - -Here is an [example project using the Panel](https://www.comet.com/examples/comet-example-yolov5?shareable=YcwMiJaZSXfcEXpGOHDD12vA1&ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 2 -``` - -### Controlling the number of Prediction Images logged to Comet - -When logging predictions from YOLOv5, Comet will log the images associated with each set of predictions. By default a maximum of 100 validation images are logged. You can increase or decrease this number using the `COMET_MAX_IMAGE_UPLOADS` environment variable. - -```shell -env COMET_MAX_IMAGE_UPLOADS=200 python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---bbox_interval 1 -``` - -### Logging Class Level Metrics - -Use the `COMET_LOG_PER_CLASS_METRICS` environment variable to log mAP, precision, recall, f1 for each class. - -```shell -env COMET_LOG_PER_CLASS_METRICS=true python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt -``` - -## Uploading a Dataset to Comet Artifacts - -If you would like to store your data using [Comet Artifacts](https://www.comet.com/docs/v2/guides/data-management/using-artifacts/#learn-more?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration), you can do so using the `upload_dataset` flag. - -The dataset be organized in the way described in the [YOLOv5 documentation](https://docs.ultralytics.com/tutorials/train-custom-datasets/#3-organize-directories). The dataset config `yaml` file must follow the same format as that of the `coco128.yaml` file. - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data coco128.yaml \ ---weights yolov5s.pt \ ---upload_dataset -``` - -You can find the uploaded dataset in the Artifacts tab in your Comet Workspace -artifact-1 - -You can preview the data directly in the Comet UI. -artifact-2 - -Artifacts are versioned and also support adding metadata about the dataset. Comet will automatically log the metadata from your dataset `yaml` file -artifact-3 - -### Using a saved Artifact - -If you would like to use a dataset from Comet Artifacts, set the `path` variable in your dataset `yaml` file to point to the following Artifact resource URL. - -``` -# contents of artifact.yaml file -path: "comet:///:" -``` -Then pass this file to your training script in the following way - -```shell -python train.py \ ---img 640 \ ---batch 16 \ ---epochs 5 \ ---data artifact.yaml \ ---weights yolov5s.pt -``` - -Artifacts also allow you to track the lineage of data as it flows through your Experimentation workflow. Here you can see a graph that shows you all the experiments that have used your uploaded dataset. -artifact-4 - -## Resuming a Training Run - -If your training run is interrupted for any reason, e.g. disrupted internet connection, you can resume the run using the `resume` flag and the Comet Run Path. - -The Run Path has the following format `comet:////`. - -This will restore the run to its state before the interruption, which includes restoring the model from a checkpoint, restoring all hyperparameters and training arguments and downloading Comet dataset Artifacts if they were used in the original run. The resumed run will continue logging to the existing Experiment in the Comet UI - -```shell -python train.py \ ---resume "comet://" -``` - -## Hyperparameter Search with the Comet Optimizer - -YOLOv5 is also integrated with Comet's Optimizer, making is simple to visualie hyperparameter sweeps in the Comet UI. - -### Configuring an Optimizer Sweep - -To configure the Comet Optimizer, you will have to create a JSON file with the information about the sweep. An example file has been provided in `utils/loggers/comet/optimizer_config.json` - -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" -``` - -The `hpo.py` script accepts the same arguments as `train.py`. If you wish to pass additional arguments to your sweep simply add them after -the script. - -```shell -python utils/loggers/comet/hpo.py \ - --comet_optimizer_config "utils/loggers/comet/optimizer_config.json" \ - --save-period 1 \ - --bbox_interval 1 -``` - -### Running a Sweep in Parallel - -```shell -comet optimizer -j utils/loggers/comet/hpo.py \ - utils/loggers/comet/optimizer_config.json" -``` - -### Visualizing Results - -Comet provides a number of ways to visualize the results of your sweep. Take a look at a [project with a completed sweep here](https://www.comet.com/examples/comet-example-yolov5/view/PrlArHGuuhDTKC1UuBmTtOSXD/panels?ref=yolov5&utm_source=yolov5&utm_medium=affilliate&utm_campaign=yolov5_comet_integration) - -hyperparameter-yolo diff --git a/spaces/Illumotion/Koboldcpp/examples/chat-13B.sh b/spaces/Illumotion/Koboldcpp/examples/chat-13B.sh deleted file mode 100644 index 35c089d57d253ae828322f550f08f102f1aed59d..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/chat-13B.sh +++ /dev/null @@ -1,41 +0,0 @@ -#!/bin/bash - -set -e - -cd "$(dirname "$0")/.." || exit - -MODEL="${MODEL:-./models/13B/ggml-model-q4_0.bin}" -PROMPT_TEMPLATE=${PROMPT_TEMPLATE:-./prompts/chat.txt} -USER_NAME="${USER_NAME:-USER}" -AI_NAME="${AI_NAME:-ChatLLaMa}" - -# Adjust to the number of CPU cores you want to use. -N_THREAD="${N_THREAD:-8}" -# Number of tokens to predict (made it larger than default because we want a long interaction) -N_PREDICTS="${N_PREDICTS:-2048}" - -# Note: you can also override the generation options by specifying them on the command line: -# For example, override the context size by doing: ./chatLLaMa --ctx_size 1024 -GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}" - -DATE_TIME=$(date +%H:%M) -DATE_YEAR=$(date +%Y) - -PROMPT_FILE=$(mktemp -t llamacpp_prompt.XXXXXXX.txt) - -sed -e "s/\[\[USER_NAME\]\]/$USER_NAME/g" \ - -e "s/\[\[AI_NAME\]\]/$AI_NAME/g" \ - -e "s/\[\[DATE_TIME\]\]/$DATE_TIME/g" \ - -e "s/\[\[DATE_YEAR\]\]/$DATE_YEAR/g" \ - $PROMPT_TEMPLATE > $PROMPT_FILE - -# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS -./main $GEN_OPTIONS \ - --model "$MODEL" \ - --threads "$N_THREAD" \ - --n_predict "$N_PREDICTS" \ - --color --interactive \ - --file ${PROMPT_FILE} \ - --reverse-prompt "${USER_NAME}:" \ - --in-prefix ' ' \ - "$@" diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/Ivanrs/batch-image-bg-remover/README.md b/spaces/Ivanrs/batch-image-bg-remover/README.md deleted file mode 100644 index 14f8e4e5a6bf277b00946fe3bb07c8dd6a34cbce..0000000000000000000000000000000000000000 --- a/spaces/Ivanrs/batch-image-bg-remover/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Batch Image Bg Remover -emoji: ⚡ -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JUNGU/SuperGlue-Image-Matching/models/utils.py b/spaces/JUNGU/SuperGlue-Image-Matching/models/utils.py deleted file mode 100644 index 1a506ec988df122539319398e78233b0afec15bb..0000000000000000000000000000000000000000 --- a/spaces/JUNGU/SuperGlue-Image-Matching/models/utils.py +++ /dev/null @@ -1,567 +0,0 @@ -# %BANNER_BEGIN% -# --------------------------------------------------------------------- -# %COPYRIGHT_BEGIN% -# -# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL -# -# Unpublished Copyright (c) 2020 -# Magic Leap, Inc., All Rights Reserved. -# -# NOTICE: All information contained herein is, and remains the property -# of COMPANY. The intellectual and technical concepts contained herein -# are proprietary to COMPANY and may be covered by U.S. and Foreign -# Patents, patents in process, and are protected by trade secret or -# copyright law. Dissemination of this information or reproduction of -# this material is strictly forbidden unless prior written permission is -# obtained from COMPANY. Access to the source code contained herein is -# hereby forbidden to anyone except current COMPANY employees, managers -# or contractors who have executed Confidentiality and Non-disclosure -# agreements explicitly covering such access. -# -# The copyright notice above does not evidence any actual or intended -# publication or disclosure of this source code, which includes -# information that is confidential and/or proprietary, and is a trade -# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION, -# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS -# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS -# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND -# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE -# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS -# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE, -# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART. -# -# %COPYRIGHT_END% -# ---------------------------------------------------------------------- -# %AUTHORS_BEGIN% -# -# Originating Authors: Paul-Edouard Sarlin -# Daniel DeTone -# Tomasz Malisiewicz -# -# %AUTHORS_END% -# --------------------------------------------------------------------*/ -# %BANNER_END% - -from pathlib import Path -import time -from collections import OrderedDict -from threading import Thread -import numpy as np -import cv2 -import torch -import matplotlib.pyplot as plt -import matplotlib -matplotlib.use('Agg') - - -class AverageTimer: - """ Class to help manage printing simple timing of code execution. """ - - def __init__(self, smoothing=0.3, newline=False): - self.smoothing = smoothing - self.newline = newline - self.times = OrderedDict() - self.will_print = OrderedDict() - self.reset() - - def reset(self): - now = time.time() - self.start = now - self.last_time = now - for name in self.will_print: - self.will_print[name] = False - - def update(self, name='default'): - now = time.time() - dt = now - self.last_time - if name in self.times: - dt = self.smoothing * dt + (1 - self.smoothing) * self.times[name] - self.times[name] = dt - self.will_print[name] = True - self.last_time = now - - def print(self, text='Timer'): - total = 0. - print('[{}]'.format(text), end=' ') - for key in self.times: - val = self.times[key] - if self.will_print[key]: - print('%s=%.3f' % (key, val), end=' ') - total += val - print('total=%.3f sec {%.1f FPS}' % (total, 1./total), end=' ') - if self.newline: - print(flush=True) - else: - print(end='\r', flush=True) - self.reset() - - -class VideoStreamer: - """ Class to help process image streams. Four types of possible inputs:" - 1.) USB Webcam. - 2.) An IP camera - 3.) A directory of images (files in directory matching 'image_glob'). - 4.) A video file, such as an .mp4 or .avi file. - """ - def __init__(self, basedir, resize, skip, image_glob, max_length=1000000): - self._ip_grabbed = False - self._ip_running = False - self._ip_camera = False - self._ip_image = None - self._ip_index = 0 - self.cap = [] - self.camera = True - self.video_file = False - self.listing = [] - self.resize = resize - self.interp = cv2.INTER_AREA - self.i = 0 - self.skip = skip - self.max_length = max_length - if isinstance(basedir, int) or basedir.isdigit(): - print('==> Processing USB webcam input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(int(basedir)) - self.listing = range(0, self.max_length) - elif basedir.startswith(('http', 'rtsp')): - print('==> Processing IP camera input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(basedir) - self.start_ip_camera_thread() - self._ip_camera = True - self.listing = range(0, self.max_length) - elif Path(basedir).is_dir(): - print('==> Processing image directory input: {}'.format(basedir)) - self.listing = list(Path(basedir).glob(image_glob[0])) - for j in range(1, len(image_glob)): - image_path = list(Path(basedir).glob(image_glob[j])) - self.listing = self.listing + image_path - self.listing.sort() - self.listing = self.listing[::self.skip] - self.max_length = np.min([self.max_length, len(self.listing)]) - if self.max_length == 0: - raise IOError('No images found (maybe bad \'image_glob\' ?)') - self.listing = self.listing[:self.max_length] - self.camera = False - elif Path(basedir).exists(): - print('==> Processing video input: {}'.format(basedir)) - self.cap = cv2.VideoCapture(basedir) - self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1) - num_frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - self.listing = range(0, num_frames) - self.listing = self.listing[::self.skip] - self.video_file = True - self.max_length = np.min([self.max_length, len(self.listing)]) - self.listing = self.listing[:self.max_length] - else: - raise ValueError('VideoStreamer input \"{}\" not recognized.'.format(basedir)) - if self.camera and not self.cap.isOpened(): - raise IOError('Could not read camera') - - def load_image(self, impath): - """ Read image as grayscale and resize to img_size. - Inputs - impath: Path to input image. - Returns - grayim: uint8 numpy array sized H x W. - """ - grayim = cv2.imread(impath, 0) - if grayim is None: - raise Exception('Error reading image %s' % impath) - w, h = grayim.shape[1], grayim.shape[0] - w_new, h_new = process_resize(w, h, self.resize) - grayim = cv2.resize( - grayim, (w_new, h_new), interpolation=self.interp) - return grayim - - def next_frame(self): - """ Return the next frame, and increment internal counter. - Returns - image: Next H x W image. - status: True or False depending whether image was loaded. - """ - - if self.i == self.max_length: - return (None, False) - if self.camera: - - if self._ip_camera: - #Wait for first image, making sure we haven't exited - while self._ip_grabbed is False and self._ip_exited is False: - time.sleep(.001) - - ret, image = self._ip_grabbed, self._ip_image.copy() - if ret is False: - self._ip_running = False - else: - ret, image = self.cap.read() - if ret is False: - print('VideoStreamer: Cannot get image from camera') - return (None, False) - w, h = image.shape[1], image.shape[0] - if self.video_file: - self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.listing[self.i]) - - w_new, h_new = process_resize(w, h, self.resize) - image = cv2.resize(image, (w_new, h_new), - interpolation=self.interp) - image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - else: - image_file = str(self.listing[self.i]) - image = self.load_image(image_file) - self.i = self.i + 1 - return (image, True) - - def start_ip_camera_thread(self): - self._ip_thread = Thread(target=self.update_ip_camera, args=()) - self._ip_running = True - self._ip_thread.start() - self._ip_exited = False - return self - - def update_ip_camera(self): - while self._ip_running: - ret, img = self.cap.read() - if ret is False: - self._ip_running = False - self._ip_exited = True - self._ip_grabbed = False - return - - self._ip_image = img - self._ip_grabbed = ret - self._ip_index += 1 - #print('IPCAMERA THREAD got frame {}'.format(self._ip_index)) - - - def cleanup(self): - self._ip_running = False - -# --- PREPROCESSING --- - -def process_resize(w, h, resize): - assert(len(resize) > 0 and len(resize) <= 2) - if len(resize) == 1 and resize[0] > -1: - scale = resize[0] / max(h, w) - w_new, h_new = int(round(w*scale)), int(round(h*scale)) - elif len(resize) == 1 and resize[0] == -1: - w_new, h_new = w, h - else: # len(resize) == 2: - w_new, h_new = resize[0], resize[1] - - # Issue warning if resolution is too small or too large. - if max(w_new, h_new) < 160: - print('Warning: input resolution is very small, results may vary') - elif max(w_new, h_new) > 2000: - print('Warning: input resolution is very large, results may vary') - - return w_new, h_new - - -def frame2tensor(frame, device): - return torch.from_numpy(frame/255.).float()[None, None].to(device) - - -def read_image(path, device, resize, rotation, resize_float): - image = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE) - if image is None: - return None, None, None - w, h = image.shape[1], image.shape[0] - w_new, h_new = process_resize(w, h, resize) - scales = (float(w) / float(w_new), float(h) / float(h_new)) - - if resize_float: - image = cv2.resize(image.astype('float32'), (w_new, h_new)) - else: - image = cv2.resize(image, (w_new, h_new)).astype('float32') - - if rotation != 0: - image = np.rot90(image, k=rotation) - if rotation % 2: - scales = scales[::-1] - - inp = frame2tensor(image, device) - return image, inp, scales - -def process_image(image, device, resize, rotation, resize_float): - image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - if image is None: - return None, None, None - w, h = image.shape[1], image.shape[0] - w_new, h_new = process_resize(w, h, resize) - scales = (float(w) / float(w_new), float(h) / float(h_new)) - - if resize_float: - image = cv2.resize(image.astype('float32'), (w_new, h_new)) - else: - image = cv2.resize(image, (w_new, h_new)).astype('float32') - - if rotation != 0: - image = np.rot90(image, k=rotation) - if rotation % 2: - scales = scales[::-1] - - inp = frame2tensor(image, device) - return image, inp, scales - -# --- GEOMETRY --- - - -def estimate_pose(kpts0, kpts1, K0, K1, thresh, conf=0.99999): - if len(kpts0) < 5: - return None - - f_mean = np.mean([K0[0, 0], K1[1, 1], K0[0, 0], K1[1, 1]]) - norm_thresh = thresh / f_mean - - kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None] - kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None] - - E, mask = cv2.findEssentialMat( - kpts0, kpts1, np.eye(3), threshold=norm_thresh, prob=conf, - method=cv2.RANSAC) - - assert E is not None - - best_num_inliers = 0 - ret = None - for _E in np.split(E, len(E) / 3): - n, R, t, _ = cv2.recoverPose( - _E, kpts0, kpts1, np.eye(3), 1e9, mask=mask) - if n > best_num_inliers: - best_num_inliers = n - ret = (R, t[:, 0], mask.ravel() > 0) - return ret - - -def rotate_intrinsics(K, image_shape, rot): - """image_shape is the shape of the image after rotation""" - assert rot <= 3 - h, w = image_shape[:2][::-1 if (rot % 2) else 1] - fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2] - rot = rot % 4 - if rot == 1: - return np.array([[fy, 0., cy], - [0., fx, w-1-cx], - [0., 0., 1.]], dtype=K.dtype) - elif rot == 2: - return np.array([[fx, 0., w-1-cx], - [0., fy, h-1-cy], - [0., 0., 1.]], dtype=K.dtype) - else: # if rot == 3: - return np.array([[fy, 0., h-1-cy], - [0., fx, cx], - [0., 0., 1.]], dtype=K.dtype) - - -def rotate_pose_inplane(i_T_w, rot): - rotation_matrices = [ - np.array([[np.cos(r), -np.sin(r), 0., 0.], - [np.sin(r), np.cos(r), 0., 0.], - [0., 0., 1., 0.], - [0., 0., 0., 1.]], dtype=np.float32) - for r in [np.deg2rad(d) for d in (0, 270, 180, 90)] - ] - return np.dot(rotation_matrices[rot], i_T_w) - - -def scale_intrinsics(K, scales): - scales = np.diag([1./scales[0], 1./scales[1], 1.]) - return np.dot(scales, K) - - -def to_homogeneous(points): - return np.concatenate([points, np.ones_like(points[:, :1])], axis=-1) - - -def compute_epipolar_error(kpts0, kpts1, T_0to1, K0, K1): - kpts0 = (kpts0 - K0[[0, 1], [2, 2]][None]) / K0[[0, 1], [0, 1]][None] - kpts1 = (kpts1 - K1[[0, 1], [2, 2]][None]) / K1[[0, 1], [0, 1]][None] - kpts0 = to_homogeneous(kpts0) - kpts1 = to_homogeneous(kpts1) - - t0, t1, t2 = T_0to1[:3, 3] - t_skew = np.array([ - [0, -t2, t1], - [t2, 0, -t0], - [-t1, t0, 0] - ]) - E = t_skew @ T_0to1[:3, :3] - - Ep0 = kpts0 @ E.T # N x 3 - p1Ep0 = np.sum(kpts1 * Ep0, -1) # N - Etp1 = kpts1 @ E # N x 3 - d = p1Ep0**2 * (1.0 / (Ep0[:, 0]**2 + Ep0[:, 1]**2) - + 1.0 / (Etp1[:, 0]**2 + Etp1[:, 1]**2)) - return d - - -def angle_error_mat(R1, R2): - cos = (np.trace(np.dot(R1.T, R2)) - 1) / 2 - cos = np.clip(cos, -1., 1.) # numercial errors can make it out of bounds - return np.rad2deg(np.abs(np.arccos(cos))) - - -def angle_error_vec(v1, v2): - n = np.linalg.norm(v1) * np.linalg.norm(v2) - return np.rad2deg(np.arccos(np.clip(np.dot(v1, v2) / n, -1.0, 1.0))) - - -def compute_pose_error(T_0to1, R, t): - R_gt = T_0to1[:3, :3] - t_gt = T_0to1[:3, 3] - error_t = angle_error_vec(t, t_gt) - error_t = np.minimum(error_t, 180 - error_t) # ambiguity of E estimation - error_R = angle_error_mat(R, R_gt) - return error_t, error_R - - -def pose_auc(errors, thresholds): - sort_idx = np.argsort(errors) - errors = np.array(errors.copy())[sort_idx] - recall = (np.arange(len(errors)) + 1) / len(errors) - errors = np.r_[0., errors] - recall = np.r_[0., recall] - aucs = [] - for t in thresholds: - last_index = np.searchsorted(errors, t) - r = np.r_[recall[:last_index], recall[last_index-1]] - e = np.r_[errors[:last_index], t] - aucs.append(np.trapz(r, x=e)/t) - return aucs - - -# --- VISUALIZATION --- - - -def plot_image_pair(imgs, dpi=100, size=6, pad=.5): - n = len(imgs) - assert n == 2, 'number of images must be two' - figsize = (size*n, size*3/4) if size is not None else None - _, ax = plt.subplots(1, n, figsize=figsize, dpi=dpi) - for i in range(n): - ax[i].imshow(imgs[i], cmap=plt.get_cmap('gray'), vmin=0, vmax=255) - ax[i].get_yaxis().set_ticks([]) - ax[i].get_xaxis().set_ticks([]) - for spine in ax[i].spines.values(): # remove frame - spine.set_visible(False) - plt.tight_layout(pad=pad) - - -def plot_keypoints(kpts0, kpts1, color='w', ps=2): - ax = plt.gcf().axes - ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps) - ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps) - - -def plot_matches(kpts0, kpts1, color, lw=1.5, ps=4): - fig = plt.gcf() - ax = fig.axes - fig.canvas.draw() - - transFigure = fig.transFigure.inverted() - fkpts0 = transFigure.transform(ax[0].transData.transform(kpts0)) - fkpts1 = transFigure.transform(ax[1].transData.transform(kpts1)) - - fig.lines = [matplotlib.lines.Line2D( - (fkpts0[i, 0], fkpts1[i, 0]), (fkpts0[i, 1], fkpts1[i, 1]), zorder=1, - transform=fig.transFigure, c=color[i], linewidth=lw) - for i in range(len(kpts0))] - ax[0].scatter(kpts0[:, 0], kpts0[:, 1], c=color, s=ps) - ax[1].scatter(kpts1[:, 0], kpts1[:, 1], c=color, s=ps) - - -def make_matching_plot(image0, image1, kpts0, kpts1, mkpts0, mkpts1, - color, text, path, show_keypoints=False, - fast_viz=False, opencv_display=False, - opencv_title='matches', small_text=[]): - - if fast_viz: - make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0, mkpts1, - color, text, path, show_keypoints, 10, - opencv_display, opencv_title, small_text) - return - - plot_image_pair([image0, image1]) - if show_keypoints: - plot_keypoints(kpts0, kpts1, color='k', ps=4) - plot_keypoints(kpts0, kpts1, color='w', ps=2) - plot_matches(mkpts0, mkpts1, color) - - fig = plt.gcf() - txt_color = 'k' if image0[:100, :150].mean() > 200 else 'w' - fig.text( - 0.01, 0.99, '\n'.join(text), transform=fig.axes[0].transAxes, - fontsize=15, va='top', ha='left', color=txt_color) - - txt_color = 'k' if image0[-100:, :150].mean() > 200 else 'w' - fig.text( - 0.01, 0.01, '\n'.join(small_text), transform=fig.axes[0].transAxes, - fontsize=5, va='bottom', ha='left', color=txt_color) - - plt.savefig(str(path), bbox_inches='tight', pad_inches=0) - plt.close() - - -def make_matching_plot_fast(image0, image1, kpts0, kpts1, mkpts0, - mkpts1, color, text, path=None, - show_keypoints=False, margin=10, - opencv_display=False, opencv_title='', - small_text=[]): - H0, W0 = image0.shape - H1, W1 = image1.shape - H, W = max(H0, H1), W0 + W1 + margin - - out = 255*np.ones((H, W), np.uint8) - out[:H0, :W0] = image0 - out[:H1, W0+margin:] = image1 - out = np.stack([out]*3, -1) - - if show_keypoints: - kpts0, kpts1 = np.round(kpts0).astype(int), np.round(kpts1).astype(int) - white = (255, 255, 255) - black = (0, 0, 0) - for x, y in kpts0: - cv2.circle(out, (x, y), 2, black, -1, lineType=cv2.LINE_AA) - cv2.circle(out, (x, y), 1, white, -1, lineType=cv2.LINE_AA) - for x, y in kpts1: - cv2.circle(out, (x + margin + W0, y), 2, black, -1, - lineType=cv2.LINE_AA) - cv2.circle(out, (x + margin + W0, y), 1, white, -1, - lineType=cv2.LINE_AA) - - mkpts0, mkpts1 = np.round(mkpts0).astype(int), np.round(mkpts1).astype(int) - color = (np.array(color[:, :3])*255).astype(int)[:, ::-1] - for (x0, y0), (x1, y1), c in zip(mkpts0, mkpts1, color): - c = c.tolist() - cv2.line(out, (x0, y0), (x1 + margin + W0, y1), - color=c, thickness=1, lineType=cv2.LINE_AA) - # display line end-points as circles - cv2.circle(out, (x0, y0), 2, c, -1, lineType=cv2.LINE_AA) - cv2.circle(out, (x1 + margin + W0, y1), 2, c, -1, - lineType=cv2.LINE_AA) - - # Scale factor for consistent visualization across scales. - sc = min(H / 640., 2.0) - - # Big text. - Ht = int(30 * sc) # text height - txt_color_fg = (255, 255, 255) - txt_color_bg = (0, 0, 0) - for i, t in enumerate(text): - cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX, - 1.0*sc, txt_color_bg, 2, cv2.LINE_AA) - cv2.putText(out, t, (int(8*sc), Ht*(i+1)), cv2.FONT_HERSHEY_DUPLEX, - 1.0*sc, txt_color_fg, 1, cv2.LINE_AA) - - # Small text. - Ht = int(18 * sc) # text height - for i, t in enumerate(reversed(small_text)): - cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX, - 0.5*sc, txt_color_bg, 2, cv2.LINE_AA) - cv2.putText(out, t, (int(8*sc), int(H-Ht*(i+.6))), cv2.FONT_HERSHEY_DUPLEX, - 0.5*sc, txt_color_fg, 1, cv2.LINE_AA) - return out - - -def error_colormap(x): - return np.clip( - np.stack([2-x*2, x*2, np.zeros_like(x), np.ones_like(x)], -1), 0, 1) diff --git a/spaces/Jason1112/ML-GUI/readme.md b/spaces/Jason1112/ML-GUI/readme.md deleted file mode 100644 index fafdf1e2a871a4f7b318dd0b81936fb2a9bdaf0b..0000000000000000000000000000000000000000 --- a/spaces/Jason1112/ML-GUI/readme.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ML-GUI -app_file: app.py -sdk: gradio -sdk_version: 3.35.2 ---- -Hướng dẫn chạy: -
      1, pip install -r requirements.txt -
      2, chạy file app.py - lưu ý: môi trường của chương trình hiện là python 3.10.6, với bản python 3.11.x có thể xảy ra lỗi khi đọc các file .pkl diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/dirtyLLMResponseCleaner.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/dirtyLLMResponseCleaner.ts deleted file mode 100644 index f3052c217445760d102949a11c64384f488865ae..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/dirtyLLMResponseCleaner.ts +++ /dev/null @@ -1,46 +0,0 @@ -export function dirtyLLMResponseCleaner(input: string) { - let str = ( - `${input || ""}` - // a summary of all the weird hallucinations I saw it make.. - .replaceAll(`"]`, `"}]`) - .replaceAll(`" ]`, `"}]`) - .replaceAll(`" ]`, `"}]`) - .replaceAll(`"\n]`, `"}]`) - .replaceAll(`"\n ]`, `"}]`) - .replaceAll(`"\n ]`, `"}]`) - .replaceAll("}}", "}") - .replaceAll("]]", "]") - .replaceAll("[[", "[") - .replaceAll("{{", "{") - .replaceAll(",,", ",") - .replaceAll("[0]", "") - .replaceAll("[1]", "") - .replaceAll("[2]", "") - .replaceAll("[3]", "") - .replaceAll("[4]", "") - .replaceAll("[panel 0]", "") - .replaceAll("[panel 1]", "") - .replaceAll("[panel 2]", "") - .replaceAll("[panel 3]", "") - .replaceAll("[panel 4]", "") - ) - - // repair missing end of JSON array - if (str.at(-1) === '}') { - str = str + "]" - } - - if (str.at(-1) === '"') { - str = str + "}]" - } - - if (str[0] === '{') { - str = "[" + str - } - - if (str[0] === '"') { - str = "[{" + str - } - - return str -} \ No newline at end of file diff --git a/spaces/K3sco/Linaqruf-anything-v3.0/app.py b/spaces/K3sco/Linaqruf-anything-v3.0/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/K3sco/Linaqruf-anything-v3.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/shapeglot_util.py b/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/shapeglot_util.py deleted file mode 100644 index 31f3e193521f12582dcd90f74ec6257b08b68f8a..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/salad-demo/salad/utils/shapeglot_util.py +++ /dev/null @@ -1,61 +0,0 @@ -# References: https://github.com/optas/shapeglot -# https://github.com/63days/PartGlot. - -from six.moves import cPickle - - -def unpickle_data(file_name, python2_to_3=False): - """Restore data previously saved with pickle_data(). - :param file_name: file holding the pickled data. - :param python2_to_3: (boolean), if True, pickle happened under python2x, unpickling under python3x. - :return: a generator over the un-pickled items. - Note, about implementing the python2_to_3 see - https://stackoverflow.com/questions/28218466/unpickling-a-python-2-object-with-python-3 - """ - - in_file = open(file_name, "rb") - if python2_to_3: - size = cPickle.load(in_file, encoding="latin1") - else: - size = cPickle.load(in_file) - - for _ in range(size): - if python2_to_3: - yield cPickle.load(in_file, encoding="latin1") - else: - yield cPickle.load(in_file) - in_file.close() - - -def get_mask_of_game_data( - game_data: DataFrame, - word2int: Dict, - only_correct: bool, - only_easy_context: bool, - max_seq_len: int, - only_one_part_name: bool, -): - """ - only_correct (if True): mask will be 1 in location iff human listener predicted correctly. - only_easy (if True): uses only easy context examples (more dissimilar triplet chairs) - max_seq_len: drops examples with len(utterance) > max_seq_len - only_one_part_name (if True): uses only utterances describing only one part in the give set. - """ - mask = np.array(game_data.correct) - if not only_correct: - mask = np.ones_like(mask, dtype=np.bool) - - if only_easy_context: - context_mask = np.array(game_data.context_condition == "easy", dtype=np.bool) - mask = np.logical_and(mask, context_mask) - - short_mask = np.array( - game_data.text.apply(lambda x: len(x)) <= max_seq_len, dtype=np.bool - ) - mask = np.logical_and(mask, short_mask) - - part_indicator, part_mask = get_part_indicator(game_data.text, word2int) - if only_one_part_name: - mask = np.logical_and(mask, part_mask) - - return mask, part_indicator diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/spec_utils.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/spec_utils.py deleted file mode 100644 index a9634fd51ff47bf90211839231774719154c37cf..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/lib/uvr5_pack/lib_v5/spec_utils.py +++ /dev/null @@ -1,672 +0,0 @@ -import hashlib -import json -import math -import os - -import librosa -import numpy as np -import soundfile as sf -from tqdm import tqdm - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError("h1_shape[3] must be greater than h2_shape[3]") - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq = s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def wave_to_spectrogram_mt( - wave, hop_length, n_fft, mid_side=False, mid_side_b2=False, reverse=False -): - import threading - - if reverse: - wave_left = np.flip(np.asfortranarray(wave[0])) - wave_right = np.flip(np.asfortranarray(wave[1])) - elif mid_side: - wave_left = np.asfortranarray(np.add(wave[0], wave[1]) / 2) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1])) - elif mid_side_b2: - wave_left = np.asfortranarray(np.add(wave[1], wave[0] * 0.5)) - wave_right = np.asfortranarray(np.subtract(wave[0], wave[1] * 0.5)) - else: - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - def run_thread(**kwargs): - global spec_left - spec_left = librosa.stft(**kwargs) - - thread = threading.Thread( - target=run_thread, - kwargs={"y": wave_left, "n_fft": n_fft, "hop_length": hop_length}, - ) - thread.start() - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - thread.join() - - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def combine_spectrograms(specs, mp): - l = min([specs[i].shape[2] for i in specs]) - spec_c = np.zeros(shape=(2, mp.param["bins"] + 1, l), dtype=np.complex64) - offset = 0 - bands_n = len(mp.param["band"]) - - for d in range(1, bands_n + 1): - h = mp.param["band"][d]["crop_stop"] - mp.param["band"][d]["crop_start"] - spec_c[:, offset : offset + h, :l] = specs[d][ - :, mp.param["band"][d]["crop_start"] : mp.param["band"][d]["crop_stop"], :l - ] - offset += h - - if offset > mp.param["bins"]: - raise ValueError("Too much bins") - - # lowpass fiter - if ( - mp.param["pre_filter_start"] > 0 - ): # and mp.param['band'][bands_n]['res_type'] in ['scipy', 'polyphase']: - if bands_n == 1: - spec_c = fft_lp_filter( - spec_c, mp.param["pre_filter_start"], mp.param["pre_filter_stop"] - ) - else: - gp = 1 - for b in range( - mp.param["pre_filter_start"] + 1, mp.param["pre_filter_stop"] - ): - g = math.pow( - 10, -(b - mp.param["pre_filter_start"]) * (3.5 - gp) / 20.0 - ) - gp = g - spec_c[:, b, :] *= g - - return np.asfortranarray(spec_c) - - -def spectrogram_to_image(spec, mode="magnitude"): - if mode == "magnitude": - if np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y**2 + 1e-8) - elif mode == "phase": - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([np.max(img, axis=2, keepdims=True), img], axis=2) - - return img - - -def reduce_vocal_aggressively(X, y, softmask): - v = X - y - y_mag_tmp = np.abs(y) - v_mag_tmp = np.abs(v) - - v_mask = v_mag_tmp > y_mag_tmp - y_mag = np.clip(y_mag_tmp - v_mag_tmp * v_mask * softmask, 0, np.inf) - - return y_mag * np.exp(1.0j * np.angle(y)) - - -def mask_silence(mag, ref, thres=0.2, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError("min_range must be >= fade_area * 2") - - mag = mag.copy() - - idx = np.where(ref.mean(axis=(0, 1)) < thres)[0] - starts = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - ends = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - uninformative = np.where(ends - starts > min_range)[0] - if len(uninformative) > 0: - starts = starts[uninformative] - ends = ends[uninformative] - old_e = None - for s, e in zip(starts, ends): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight = np.linspace(0, 1, fade_size) - mag[:, :, s : s + fade_size] += weight * ref[:, :, s : s + fade_size] - else: - s -= fade_size - - if e != mag.shape[2]: - weight = np.linspace(1, 0, fade_size) - mag[:, :, e - fade_size : e] += weight * ref[:, :, e - fade_size : e] - else: - e += fade_size - - mag[:, :, s + fade_size : e - fade_size] += ref[ - :, :, s + fade_size : e - fade_size - ] - old_e = e - - return mag - - -def align_wave_head_and_tail(a, b): - l = min([a[0].size, b[0].size]) - - return a[:l, :l], b[:l, :l] - - -def cache_or_load(mix_path, inst_path, mp): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = "mph{}".format( - hashlib.sha1(json.dumps(mp.param, sort_keys=True).encode("utf-8")).hexdigest() - ) - mix_cache_dir = os.path.join("cache", cache_dir) - inst_cache_dir = os.path.join("cache", cache_dir) - - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + ".npy") - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + ".npy") - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X_spec_m = np.load(mix_cache_path) - y_spec_m = np.load(inst_cache_path) - else: - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - X_wave[d], _ = librosa.load( - mix_path, bp["sr"], False, dtype=np.float32, res_type=bp["res_type"] - ) - y_wave[d], _ = librosa.load( - inst_path, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - else: # lower bands - X_wave[d] = librosa.resample( - X_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - y_wave[d] = librosa.resample( - y_wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - X_wave[d], y_wave[d] = align_wave_head_and_tail(X_wave[d], y_wave[d]) - - X_spec_s[d] = wave_to_spectrogram( - X_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - y_spec_s[d] = wave_to_spectrogram( - y_wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - del X_wave, y_wave - - X_spec_m = combine_spectrograms(X_spec_s, mp) - y_spec_m = combine_spectrograms(y_spec_s, mp) - - if X_spec_m.shape != y_spec_m.shape: - raise ValueError("The combined spectrograms are different: " + mix_path) - - _, ext = os.path.splitext(mix_path) - - np.save(mix_cache_path, X_spec_m) - np.save(inst_cache_path, y_spec_m) - - return X_spec_m, y_spec_m - - -def spectrogram_to_wave(spec, hop_length, mid_side, mid_side_b2, reverse): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def spectrogram_to_wave_mt(spec, hop_length, mid_side, reverse, mid_side_b2): - import threading - - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - def run_thread(**kwargs): - global wave_left - wave_left = librosa.istft(**kwargs) - - thread = threading.Thread( - target=run_thread, kwargs={"stft_matrix": spec_left, "hop_length": hop_length} - ) - thread.start() - wave_right = librosa.istft(spec_right, hop_length=hop_length) - thread.join() - - if reverse: - return np.asfortranarray([np.flip(wave_left), np.flip(wave_right)]) - elif mid_side: - return np.asfortranarray( - [np.add(wave_left, wave_right / 2), np.subtract(wave_left, wave_right / 2)] - ) - elif mid_side_b2: - return np.asfortranarray( - [ - np.add(wave_right / 1.25, 0.4 * wave_left), - np.subtract(wave_left / 1.25, 0.4 * wave_right), - ] - ) - else: - return np.asfortranarray([wave_left, wave_right]) - - -def cmb_spectrogram_to_wave(spec_m, mp, extra_bins_h=None, extra_bins=None): - wave_band = {} - bands_n = len(mp.param["band"]) - offset = 0 - - for d in range(1, bands_n + 1): - bp = mp.param["band"][d] - spec_s = np.ndarray( - shape=(2, bp["n_fft"] // 2 + 1, spec_m.shape[2]), dtype=complex - ) - h = bp["crop_stop"] - bp["crop_start"] - spec_s[:, bp["crop_start"] : bp["crop_stop"], :] = spec_m[ - :, offset : offset + h, : - ] - - offset += h - if d == bands_n: # higher - if extra_bins_h: # if --high_end_process bypass - max_bin = bp["n_fft"] // 2 - spec_s[:, max_bin - extra_bins_h : max_bin, :] = extra_bins[ - :, :extra_bins_h, : - ] - if bp["hpf_start"] > 0: - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - if bands_n == 1: - wave = spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - else: - wave = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - else: - sr = mp.param["band"][d + 1]["sr"] - if d == 1: # lower - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave = librosa.resample( - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - bp["sr"], - sr, - res_type="sinc_fastest", - ) - else: # mid - spec_s = fft_hp_filter(spec_s, bp["hpf_start"], bp["hpf_stop"] - 1) - spec_s = fft_lp_filter(spec_s, bp["lpf_start"], bp["lpf_stop"]) - wave2 = np.add( - wave, - spectrogram_to_wave( - spec_s, - bp["hl"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ), - ) - # wave = librosa.core.resample(wave2, bp['sr'], sr, res_type="sinc_fastest") - wave = librosa.core.resample(wave2, bp["sr"], sr, res_type="scipy") - - return wave.T - - -def fft_lp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop): - g -= 1 / (bin_stop - bin_start) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, bin_stop:, :] *= 0 - - return spec - - -def fft_hp_filter(spec, bin_start, bin_stop): - g = 1.0 - for b in range(bin_start, bin_stop, -1): - g -= 1 / (bin_start - bin_stop) - spec[:, b, :] = g * spec[:, b, :] - - spec[:, 0 : bin_stop + 1, :] *= 0 - - return spec - - -def mirroring(a, spec_m, input_high_end, mp): - if "mirroring" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mirror = mirror * np.exp(1.0j * np.angle(input_high_end)) - - return np.where( - np.abs(input_high_end) <= np.abs(mirror), input_high_end, mirror - ) - - if "mirroring2" == a: - mirror = np.flip( - np.abs( - spec_m[ - :, - mp.param["pre_filter_start"] - - 10 - - input_high_end.shape[1] : mp.param["pre_filter_start"] - - 10, - :, - ] - ), - 1, - ) - mi = np.multiply(mirror, input_high_end * 1.7) - - return np.where(np.abs(input_high_end) <= np.abs(mi), input_high_end, mi) - - -def ensembling(a, specs): - for i in range(1, len(specs)): - if i == 1: - spec = specs[0] - - ln = min([spec.shape[2], specs[i].shape[2]]) - spec = spec[:, :, :ln] - specs[i] = specs[i][:, :, :ln] - - if "min_mag" == a: - spec = np.where(np.abs(specs[i]) <= np.abs(spec), specs[i], spec) - if "max_mag" == a: - spec = np.where(np.abs(specs[i]) >= np.abs(spec), specs[i], spec) - - return spec - - -def stft(wave, nfft, hl): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - spec_left = librosa.stft(wave_left, nfft, hop_length=hl) - spec_right = librosa.stft(wave_right, nfft, hop_length=hl) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def istft(spec, hl): - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hl) - wave_right = librosa.istft(spec_right, hop_length=hl) - wave = np.asfortranarray([wave_left, wave_right]) - - -if __name__ == "__main__": - import argparse - import sys - import time - - import cv2 - from model_param_init import ModelParameters - - p = argparse.ArgumentParser() - p.add_argument( - "--algorithm", - "-a", - type=str, - choices=["invert", "invert_p", "min_mag", "max_mag", "deep", "align"], - default="min_mag", - ) - p.add_argument( - "--model_params", - "-m", - type=str, - default=os.path.join("modelparams", "1band_sr44100_hl512.json"), - ) - p.add_argument("--output_name", "-o", type=str, default="output") - p.add_argument("--vocals_only", "-v", action="store_true") - p.add_argument("input", nargs="+") - args = p.parse_args() - - start_time = time.time() - - if args.algorithm.startswith("invert") and len(args.input) != 2: - raise ValueError("There should be two input files.") - - if not args.algorithm.startswith("invert") and len(args.input) < 2: - raise ValueError("There must be at least two input files.") - - wave, specs = {}, {} - mp = ModelParameters(args.model_params) - - for i in range(len(args.input)): - spec = {} - - for d in range(len(mp.param["band"]), 0, -1): - bp = mp.param["band"][d] - - if d == len(mp.param["band"]): # high-end band - wave[d], _ = librosa.load( - args.input[i], - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - - if len(wave[d].shape) == 1: # mono to stereo - wave[d] = np.array([wave[d], wave[d]]) - else: # lower bands - wave[d] = librosa.resample( - wave[d + 1], - mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - - spec[d] = wave_to_spectrogram( - wave[d], - bp["hl"], - bp["n_fft"], - mp.param["mid_side"], - mp.param["mid_side_b2"], - mp.param["reverse"], - ) - - specs[i] = combine_spectrograms(spec, mp) - - del wave - - if args.algorithm == "deep": - d_spec = np.where(np.abs(specs[0]) <= np.abs(spec[1]), specs[0], spec[1]) - v_spec = d_spec - specs[1] - sf.write( - os.path.join("{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - - if args.algorithm.startswith("invert"): - ln = min([specs[0].shape[2], specs[1].shape[2]]) - specs[0] = specs[0][:, :, :ln] - specs[1] = specs[1][:, :, :ln] - - if "invert_p" == args.algorithm: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - max_mag = np.where(X_mag >= y_mag, X_mag, y_mag) - v_spec = specs[1] - max_mag * np.exp(1.0j * np.angle(specs[0])) - else: - specs[1] = reduce_vocal_aggressively(specs[0], specs[1], 0.2) - v_spec = specs[0] - specs[1] - - if not args.vocals_only: - X_mag = np.abs(specs[0]) - y_mag = np.abs(specs[1]) - v_mag = np.abs(v_spec) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite("{}_X.png".format(args.output_name), X_image) - cv2.imwrite("{}_y.png".format(args.output_name), y_image) - cv2.imwrite("{}_v.png".format(args.output_name), v_image) - - sf.write( - "{}_X.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[0], mp), - mp.param["sr"], - ) - sf.write( - "{}_y.wav".format(args.output_name), - cmb_spectrogram_to_wave(specs[1], mp), - mp.param["sr"], - ) - - sf.write( - "{}_v.wav".format(args.output_name), - cmb_spectrogram_to_wave(v_spec, mp), - mp.param["sr"], - ) - else: - if not args.algorithm == "deep": - sf.write( - os.path.join("ensembled", "{}.wav".format(args.output_name)), - cmb_spectrogram_to_wave(ensembling(args.algorithm, specs), mp), - mp.param["sr"], - ) - - if args.algorithm == "align": - trackalignment = [ - { - "file1": '"{}"'.format(args.input[0]), - "file2": '"{}"'.format(args.input[1]), - } - ] - - for i, e in tqdm(enumerate(trackalignment), desc="Performing Alignment..."): - os.system(f"python lib/align_tracks.py {e['file1']} {e['file2']}") - - # print('Total time: {0:.{1}f}s'.format(time.time() - start_time, 1)) diff --git a/spaces/Kay2048/IKay/README.md b/spaces/Kay2048/IKay/README.md deleted file mode 100644 index a1d8f85510b85899dc7d22770ab7859484075847..0000000000000000000000000000000000000000 --- a/spaces/Kay2048/IKay/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Real CUGAN -emoji: 🔥 -colorFrom: pink -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/KevinQHLin/UniVTG/run_on_video/__init__.py b/spaces/KevinQHLin/UniVTG/run_on_video/__init__.py deleted file mode 100644 index 9ed2adb16c75bea2cdd119bc3c19b67bf68bcc37..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/run_on_video/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from run_on_video.video_extractor import vid2clip, txt2clip diff --git a/spaces/Kimata/multimodal-deepfakes/README.md b/spaces/Kimata/multimodal-deepfakes/README.md deleted file mode 100644 index 60c605b88ac603544946c4e6ef07ac4a97c85b27..0000000000000000000000000000000000000000 --- a/spaces/Kimata/multimodal-deepfakes/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Deepfakes_Video_Detector -emoji: 🔥 -colorFrom: blue -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/KyanChen/BuildingExtraction/Tools/CutImgSegWithLabel.py b/spaces/KyanChen/BuildingExtraction/Tools/CutImgSegWithLabel.py deleted file mode 100644 index 7f12a40947787dad7fcb6f28f7f5e1fb567045fd..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/BuildingExtraction/Tools/CutImgSegWithLabel.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -import glob -from skimage import io -import tqdm -img_piece_size = (512, 512) - - -def get_pieces(img_path, label_path, img_format): - pieces_folder = os.path.abspath(img_path + '/..') - if not os.path.exists(pieces_folder + '/img_pieces'): - os.makedirs(pieces_folder + '/img_pieces') - if not os.path.exists(pieces_folder + '/label_pieces'): - os.makedirs(pieces_folder + '/label_pieces') - - img_path_list = glob.glob(img_path+'/austin31.%s' % img_format) - for idx in tqdm.tqdm(range(len(img_path_list))): - img = io.imread(img_path_list[idx]) - label = io.imread(label_path + '/' + os.path.basename(img_path_list[idx]).replace(img_format, img_format)) - h, w, c = img.shape - h_list = list(range(0, h-img_piece_size[1], int(0.9 * img_piece_size[1]))) - h_list = h_list + [h - img_piece_size[1]] - # h_list[-1] = h - img_piece_size[1] - w_list = list(range(0, w-img_piece_size[0], int(0.9 * img_piece_size[0]))) - # w_list[-1] = w - img_piece_size[0] - w_list = w_list + [w - img_piece_size[0]] - for h_step in h_list: - for w_step in w_list: - img_piece = img[h_step:h_step+img_piece_size[1], w_step:w_step+img_piece_size[0]] - label_piece = label[h_step:h_step + img_piece_size[1], w_step:w_step + img_piece_size[0]] - assert label_piece.shape[0] == img_piece_size[1] and label_piece.shape[1] == img_piece_size[0], 'shape error' - io.imsave(pieces_folder + '/img_pieces%s_%d_%d.png' % - (img_path_list[idx].replace(img_path, '').replace('.' + img_format, ''), w_step, h_step), img_piece, check_contrast=False) - io.imsave(pieces_folder + '/label_pieces%s_%d_%d.png' % - (img_path_list[idx].replace(img_path, '').replace('.' + img_format, ''), w_step, h_step), label_piece, check_contrast=False) - - -if __name__ == "__main__": - parent_path = r'J:\20200923-建筑提取数据集\InriaAerialImageDataset\train' - for i in ['train', 'val', 'test']: - img_path = parent_path + '/' + i + '/img' - label_path = parent_path + '/' + i + '/gt' - img_format = 'tif' - get_pieces(img_path, label_path, img_format) - diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_charbox_train.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_charbox_train.py deleted file mode 100644 index 45d50d0d151fca5c4e9118d1f6b1f094f8a51324..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_datasets/ST_charbox_train.py +++ /dev/null @@ -1,23 +0,0 @@ -# Text Recognition Training set, including: -# Synthetic Datasets: SynthText (with character level boxes) - -train_img_root = 'data/mixture' - -train_img_prefix = f'{train_img_root}/SynthText' - -train_ann_file = f'{train_img_root}/SynthText/instances_train.txt' - -train = dict( - type='OCRSegDataset', - img_prefix=train_img_prefix, - ann_file=train_ann_file, - loader=dict( - type='AnnFileLoader', - repeat=1, - file_format='txt', - parser=dict( - type='LineJsonParser', keys=['file_name', 'annotations', 'text'])), - pipeline=None, - test_mode=False) - -train_list = [train] diff --git a/spaces/MWilinski/bot/data/hugging_face_docs_dataset.py b/spaces/MWilinski/bot/data/hugging_face_docs_dataset.py deleted file mode 100644 index 39e302aa3708df4ea88103421c66cf997d29884d..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/data/hugging_face_docs_dataset.py +++ /dev/null @@ -1,190 +0,0 @@ -import glob -import json -import os -import re -import subprocess -from typing import List - -import requests -import pandas as pd -from bs4 import BeautifulSoup -from markdown import markdown -import nbformat -from nbconvert import MarkdownExporter -from nbconvert.preprocessors import Preprocessor, ClearOutputPreprocessor -from tqdm import tqdm - - -VALIDATE_URLS = False - - -def download_repositories(repo_urls_file: str, repo_dir: str): - """ - Downloads the Hugging Face repositories. - """ - if not os.path.exists(repo_dir): - os.makedirs(repo_dir) - with open(repo_urls_file, "r") as f: - repositories_urls = json.load(f)["urls"] - print(f'Downloading {len(repositories_urls)} repositories') - for url in repositories_urls: - try: - subprocess.run(["git", "clone", url], cwd=repo_dir) - except subprocess.CalledProcessError as e: - print("Command failed with error:", e.stderr) - - -class EmptyCellPreprocessor(Preprocessor): - def preprocess_cell(self, cell, resources, index): - if cell.source.strip() == '': - cell.source = '' - cell.cell_type = 'raw' - return cell, resources - - -def convert_notebook_to_txt(filename: str): - """ - Converts a notebook to a markdown file. - """ - with open(filename) as f: - notebook = nbformat.read(f, as_version=4) - # id validation error fix - for cell in notebook['cells']: - cell['id'] = str(cell['id']) - - clear_output = ClearOutputPreprocessor() - notebook, resources = clear_output.preprocess(notebook, {}) - - exporter = MarkdownExporter() - exporter.register_preprocessor(EmptyCellPreprocessor, enabled=True) - output_notebook_text, resources = exporter.from_notebook_node(notebook) - - new_filename = filename.replace('.ipynb', '_ipynb.txt') - with open(new_filename, 'w') as f: - f.write(output_notebook_text) - return new_filename - - -def extract_files_from_directories( - repo_urls_file: str, - repo_dir: str, - docs_dir: str, - files_extensions: List[str] -) -> None: - - """ - This function reads markdown and markdownx files from the repositories directory, - filters out non-English files, and adds the source GitHub URL as the first line of each file. - The resulting files are saved in the docs_dir. - """ - languages = pd.read_csv("language-codes.csv").loc[:,"alpha2"].tolist() - languages.remove("en") - - files = [ - filename - for extension in files_extensions - for filename in glob.glob(repo_dir + f"**/*{extension}", recursive=True) - ] - print(f'Used extensions: {", ".join(files_extensions)}') - print(f'Found {len(files)} files') - - repo_urls = [] - with open(repo_urls_file, "r") as f: - repo_urls = json.load(f)["urls"] - - # filter out the files that are not in english - filtered_files = [] - for filename in files: - sep_file = filename.split("/") - for seq in sep_file: - if seq in languages: - break - else: - filtered_files.append(filename) - print(f'Found {len(filtered_files)} files in English') - - # generate a GitHub URL for a file based on its name and a list of possible repository URLs - def get_github_url(filename: str, repo_urls: str, repo_dir: str) -> str: - source = filename.replace(repo_dir, '') - repo_name, file_path = source.split('/', 1) - repo_url_prefix = None - for repo_url in repo_urls: - if repo_name == repo_url.split('/')[-1]: - repo_url_prefix = repo_url - break - if not repo_url_prefix: - raise ValueError(f"Repo URL not found for {repo_name}") - url = f'{repo_url_prefix}/blob/main/{file_path}' - if VALIDATE_URLS: - try: - response = requests.get(url) - response.raise_for_status() - except: - print(f'filename: {filename}') - print(f'repo: {repo_name}, file: {file_path}') - print(f'url: {url}') - raise - return url - - # creates a valid filename by replacing certain characters and removing the repo_dir path - def create_filename_from_path(filename: str, repo_dir: str) -> str: - filename = filename.replace(repo_dir, '') - chars_to_replace = ['/', '{', '}', '-', '.'] - filename = ''.join(['_' if c in chars_to_replace else c for c in filename]) - return filename - - # copy the files with the source added in the first line - if not os.path.exists(docs_dir): - os.makedirs(docs_dir) - copied_files = [] - for filename in tqdm(filtered_files): - source_url = get_github_url(filename, repo_urls, repo_dir) - data = f"source: {source_url}\n\n" - # convert jupyter notebooks to txt files - try: - if filename.endswith('.ipynb'): - filename = convert_notebook_to_txt(filename) - # rename and copy files - with open(filename, 'r') as f: - data += f.read() - output_filename = docs_dir + create_filename_from_path(filename, repo_dir) - with open(output_filename, 'w') as f: - f.write(data) - if not os.path.isfile(output_filename): - raise ValueError(f"Failed to create the output file: {output_filename}") - copied_files.append(output_filename) - except Exception as ex: - print(f'Failed to copy file {filename}: {ex}') - - print(f'Successfully copied {len(set(copied_files))}/{len(filtered_files)} files') - - -def markdown_cleaner(data: str): - """ - Clean markdown text. - - Args: - data (str): The markdown text to be cleaned. - - Returns: - str: The cleaned markdown text. - """ - soupped = BeautifulSoup(markdown(data), "html.parser") - raw_text = ''.join(soupped.findAll(string=True)) - clean_text = re.sub(r"", "", raw_text, flags=re.DOTALL) - # remove any special tokens e.g <|endoftext|> - clean_text = re.sub(r"<\|endoftext\|>", "", clean_text, flags=re.DOTALL) - # discard non english text - clean_text = re.sub(r"[^a-zA-Z0-9\s]", "", clean_text, flags=re.DOTALL) - return "\n".join([t for t in clean_text.split("\n") if t]) - - -if __name__ == '__main__': - repo_urls_file = "./datasets/hf_repositories_urls.json" - repo_dir = "./datasets/huggingface_repositories/" - docs_dir = "./datasets/huggingface_docs/" - download_repositories(repo_urls_file, repo_dir) - extract_files_from_directories( - repo_urls_file, repo_dir, docs_dir, - files_extensions=['.md', '.mdx', '.ipynb'] - ) diff --git a/spaces/Manjushri/MusicGen/audiocraft/utils/__init__.py b/spaces/Manjushri/MusicGen/audiocraft/utils/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/utils/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/Manmay/tortoise-tts/tortoise/models/autoregressive.py b/spaces/Manmay/tortoise-tts/tortoise/models/autoregressive.py deleted file mode 100644 index fcd1a94ff17ee0847048e529581612364c90cbe6..0000000000000000000000000000000000000000 --- a/spaces/Manmay/tortoise-tts/tortoise/models/autoregressive.py +++ /dev/null @@ -1,582 +0,0 @@ -import functools - -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList -from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions -from transformers.utils.model_parallel_utils import get_device_map, assert_device_map -from tortoise.models.arch_util import AttentionBlock -from tortoise.utils.typical_sampling import TypicalLogitsWarper - - -def null_position_embeddings(range, dim): - return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device) - - -class ResBlock(nn.Module): - """ - Basic residual convolutional block that uses GroupNorm. - """ - def __init__(self, chan): - super().__init__() - self.net = nn.Sequential( - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan), - nn.ReLU(), - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan) - ) - - def forward(self, x): - return F.relu(self.net(x) + x) - - -class GPT2InferenceModel(GPT2PreTrainedModel): - def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear, kv_cache=False): - super().__init__(config) - self.transformer = gpt - self.text_pos_embedding = text_pos_emb - self.embeddings = embeddings - self.final_norm = norm - self.lm_head = nn.Sequential(norm, linear) - self.kv_cache = kv_cache - - # Model parallel - self.model_parallel = False - self.device_map = None - self.cached_mel_emb = None - def parallelize(self, device_map=None): - self.device_map = ( - get_device_map(len(self.transformer.h), range(max(1, torch.cuda.device_count()))) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - def deparallelize(self): - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - if torch.backends.mps.is_available(): - torch.mps.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def store_mel_emb(self, mel_emb): - self.cached_mel_emb = mel_emb - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) # usually None - if not self.kv_cache: - past_key_values = None - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - return { - "input_ids": input_ids, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - def forward( - self, - input_ids=None, - past_key_values=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - assert self.cached_mel_emb is not None - assert inputs_embeds is None # Not supported by this inference model. - assert labels is None # Training not supported by this inference model. - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - # Create embedding - mel_len = self.cached_mel_emb.shape[1] - if input_ids.shape[1] != 1: - text_inputs = input_ids[:, mel_len:] - text_emb = self.embeddings(text_inputs) - text_emb = text_emb + self.text_pos_embedding(text_emb) - if self.cached_mel_emb.shape[0] != text_emb.shape[0]: - mel_emb = self.cached_mel_emb.repeat_interleave( - text_emb.shape[0] // self.cached_mel_emb.shape[0], 0 - ) - else: # this outcome only occurs once per loop in most cases - mel_emb = self.cached_mel_emb - emb = torch.cat([mel_emb, text_emb], dim=1) - else: - emb = self.embeddings(input_ids) - emb = emb + self.text_pos_embedding.get_fixed_embedding( - attention_mask.shape[1] - mel_len, attention_mask.device - ) - transformer_outputs = self.transformer( - inputs_embeds=emb, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - if torch.backends.mps.is_available(): - self.to(self.transformer.first_device) - else: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + transformer_outputs[1:] - - return CausalLMOutputWithCrossAttentions( - loss=None, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache(past, beam_idx): - """ - This function is used to re-order the :obj:`past_key_values` cache if - :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is - called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step. - """ - return tuple( - tuple( - past_state.index_select(0, beam_idx.to(past_state.device)) - for past_state in layer_past - ) - for layer_past in past - ) - - -class ConditioningEncoder(nn.Module): - def __init__(self, - spec_dim, - embedding_dim, - attn_blocks=6, - num_attn_heads=4, - do_checkpointing=False, - mean=False): - super().__init__() - attn = [] - self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1) - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - self.do_checkpointing = do_checkpointing - self.mean = mean - - def forward(self, x): - h = self.init(x) - h = self.attn(h) - if self.mean: - return h.mean(dim=2) - else: - return h[:, :, 0] - - -class LearnedPositionEmbeddings(nn.Module): - def __init__(self, seq_len, model_dim, init=.02): - super().__init__() - self.emb = nn.Embedding(seq_len, model_dim) - # Initializing this way is standard for GPT-2 - self.emb.weight.data.normal_(mean=0.0, std=init) - - def forward(self, x): - sl = x.shape[1] - return self.emb(torch.arange(0, sl, device=x.device)) - - def get_fixed_embedding(self, ind, dev): - return self.emb(torch.tensor([ind], device=dev)).unsqueeze(0) - - -def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing): - """ - GPT-2 implemented by the HuggingFace library. - """ - from transformers import GPT2Config, GPT2Model - gpt_config = GPT2Config(vocab_size=256, # Unused. - n_positions=max_mel_seq_len+max_text_seq_len, - n_ctx=max_mel_seq_len+max_text_seq_len, - n_embd=model_dim, - n_layer=layers, - n_head=heads, - gradient_checkpointing=checkpointing, - use_cache=not checkpointing) - gpt = GPT2Model(gpt_config) - # Override the built in positional embeddings - del gpt.wpe - gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim) - # Built-in token embeddings are unused. - del gpt.wte - return gpt, LearnedPositionEmbeddings(max_mel_seq_len, model_dim), LearnedPositionEmbeddings(max_text_seq_len, model_dim),\ - None, None - - -class MelEncoder(nn.Module): - def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2): - super().__init__() - self.channels = channels - self.encoder = nn.Sequential(nn.Conv1d(mel_channels, channels//4, kernel_size=3, padding=1), - nn.Sequential(*[ResBlock(channels//4) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//4, channels//2, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//16, channels//2), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels//2) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//2, channels, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//8, channels), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]), - ) - self.reduction = 4 - - - def forward(self, x): - for e in self.encoder: - x = e(x) - return x.permute(0,2,1) - - -class UnifiedVoice(nn.Module): - def __init__(self, layers=8, model_dim=512, heads=8, max_text_tokens=120, max_mel_tokens=250, max_conditioning_inputs=1, - mel_length_compression=1024, number_text_tokens=256, - start_text_token=None, number_mel_codes=8194, start_mel_token=8192, - stop_mel_token=8193, train_solo_embeddings=False, use_mel_codes_as_input=True, - checkpointing=True, types=1): - """ - Args: - layers: Number of layers in transformer stack. - model_dim: Operating dimensions of the transformer - heads: Number of transformer heads. Must be divisible by model_dim. Recommend model_dim//64 - max_text_tokens: Maximum number of text tokens that will be encountered by model. - max_mel_tokens: Maximum number of MEL tokens that will be encountered by model. - max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s). - mel_length_compression: The factor between and . Used to compute MEL code padding given wav input length. - number_text_tokens: - start_text_token: - stop_text_token: - number_mel_codes: - start_mel_token: - stop_mel_token: - train_solo_embeddings: - use_mel_codes_as_input: - checkpointing: - """ - super().__init__() - - self.number_text_tokens = number_text_tokens - self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token - self.stop_text_token = 0 - self.number_mel_codes = number_mel_codes - self.start_mel_token = start_mel_token - self.stop_mel_token = stop_mel_token - self.layers = layers - self.heads = heads - self.max_mel_tokens = max_mel_tokens - self.max_text_tokens = max_text_tokens - self.model_dim = model_dim - self.max_conditioning_inputs = max_conditioning_inputs - self.mel_length_compression = mel_length_compression - self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads) - self.text_embedding = nn.Embedding(self.number_text_tokens*types+1, model_dim) - if use_mel_codes_as_input: - self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim) - else: - self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1) - self.gpt, self.mel_pos_embedding, self.text_pos_embedding, self.mel_layer_pos_embedding, self.text_layer_pos_embedding = \ - build_hf_gpt_transformer(layers, model_dim, heads, self.max_mel_tokens+2+self.max_conditioning_inputs, self.max_text_tokens+2, checkpointing) - if train_solo_embeddings: - self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - else: - self.mel_solo_embedding = 0 - self.text_solo_embedding = 0 - - self.final_norm = nn.LayerNorm(model_dim) - self.text_head = nn.Linear(model_dim, self.number_text_tokens*types+1) - self.mel_head = nn.Linear(model_dim, self.number_mel_codes) - - # Initialize the embeddings per the GPT-2 scheme - embeddings = [self.text_embedding] - if use_mel_codes_as_input: - embeddings.append(self.mel_embedding) - for module in embeddings: - module.weight.data.normal_(mean=0.0, std=.02) - def post_init_gpt2_config(self, use_deepspeed=False, kv_cache=False, half=False): - seq_length = self.max_mel_tokens + self.max_text_tokens + 2 - gpt_config = GPT2Config( - vocab_size=self.max_mel_tokens, - n_positions=seq_length, - n_ctx=seq_length, - n_embd=self.model_dim, - n_layer=self.layers, - n_head=self.heads, - gradient_checkpointing=False, - use_cache=True, - ) - self.inference_model = GPT2InferenceModel( - gpt_config, - self.gpt, - self.mel_pos_embedding, - self.mel_embedding, - self.final_norm, - self.mel_head, - kv_cache=kv_cache, - ) - if use_deepspeed and half and torch.cuda.is_available(): - import deepspeed - self.ds_engine = deepspeed.init_inference(model=self.inference_model, - mp_size=1, - replace_with_kernel_inject=True, - dtype=torch.float16) - self.inference_model = self.ds_engine.module.eval() - elif use_deepspeed and torch.cuda.is_available(): - import deepspeed - self.ds_engine = deepspeed.init_inference(model=self.inference_model, - mp_size=1, - replace_with_kernel_inject=True, - dtype=torch.float32) - self.inference_model = self.ds_engine.module.eval() - else: - self.inference_model = self.inference_model.eval() - - # self.inference_model = PrunedGPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head) - self.gpt.wte = self.mel_embedding - def build_aligned_inputs_and_targets(self, input, start_token, stop_token): - inp = F.pad(input, (1,0), value=start_token) - tar = F.pad(input, (0,1), value=stop_token) - return inp, tar - - def set_mel_padding(self, mel_input_tokens, wav_lengths): - """ - Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in - that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required - preformatting to create a working TTS model. - """ - # Set padding areas within MEL (currently it is coded with the MEL code for ). - mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode='trunc') - for b in range(len(mel_lengths)): - actual_end = mel_lengths[b] + 1 # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token. - if actual_end < mel_input_tokens.shape[-1]: - mel_input_tokens[b, actual_end:] = self.stop_mel_token - return mel_input_tokens - - def get_logits(self, speech_conditioning_inputs, first_inputs, first_head, second_inputs=None, second_head=None, get_attns=False, return_latent=False): - if second_inputs is not None: - emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1) - else: - emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1) - - gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns) - if get_attns: - return gpt_out.attentions - - enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input - enc = self.final_norm(enc) - - if return_latent: - return enc[:, speech_conditioning_inputs.shape[1]:speech_conditioning_inputs.shape[1]+first_inputs.shape[1]], enc[:, -second_inputs.shape[1]:] - - first_logits = enc[:, :first_inputs.shape[1]] - first_logits = first_head(first_logits) - first_logits = first_logits.permute(0,2,1) - if second_inputs is not None: - second_logits = enc[:, -second_inputs.shape[1]:] - second_logits = second_head(second_logits) - second_logits = second_logits.permute(0,2,1) - return first_logits, second_logits - else: - return first_logits - - def get_conditioning(self, speech_conditioning_input): - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len( - speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - conds = conds.mean(dim=1) - return conds - - def forward(self, speech_conditioning_latent, text_inputs, text_lengths, mel_codes, wav_lengths, types=None, text_first=True, raw_mels=None, return_attentions=False, - return_latent=False, clip_inputs=True): - """ - Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode - (actuated by `text_first`). - - speech_conditioning_input: MEL float tensor, (b,1024) - text_inputs: long tensor, (b,t) - text_lengths: long tensor, (b,) - mel_inputs: long tensor, (b,m) - wav_lengths: long tensor, (b,) - raw_mels: MEL float tensor (b,80,s) - - If return_attentions is specified, only logits are returned. - If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned. - If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality. - """ - # Types are expressed by expanding the text embedding space. - if types is not None: - text_inputs = text_inputs * (1+types).unsqueeze(-1) - - if clip_inputs: - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. - max_text_len = text_lengths.max() - text_inputs = text_inputs[:, :max_text_len] - max_mel_len = wav_lengths.max() // self.mel_length_compression - mel_codes = mel_codes[:, :max_mel_len] - if raw_mels is not None: - raw_mels = raw_mels[:, :, :max_mel_len*4] - mel_codes = self.set_mel_padding(mel_codes, wav_lengths) - text_inputs = F.pad(text_inputs, (0,1), value=self.stop_text_token) - mel_codes = F.pad(mel_codes, (0,1), value=self.stop_mel_token) - - conds = speech_conditioning_latent.unsqueeze(1) - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - mel_codes, mel_targets = self.build_aligned_inputs_and_targets(mel_codes, self.start_mel_token, self.stop_mel_token) - if raw_mels is not None: - mel_inp = F.pad(raw_mels, (0, 8)) - else: - mel_inp = mel_codes - mel_emb = self.mel_embedding(mel_inp) - mel_emb = mel_emb + self.mel_pos_embedding(mel_codes) - - if text_first: - text_logits, mel_logits = self.get_logits(conds, text_emb, self.text_head, mel_emb, self.mel_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return mel_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - else: - mel_logits, text_logits = self.get_logits(conds, mel_emb, self.mel_head, text_emb, self.text_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return text_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - - if return_attentions: - return mel_logits - loss_text = F.cross_entropy(text_logits, text_targets.long()) - loss_mel = F.cross_entropy(mel_logits, mel_targets.long()) - return loss_text.mean(), loss_mel.mean(), mel_logits - def compute_embeddings( - self, - cond_latents, - text_inputs, - ): - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs = F.pad(text_inputs, (1, 0), value=self.start_text_token) - emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - conds = cond_latents.unsqueeze(1) - emb = torch.cat([conds, emb], dim=1) - self.inference_model.store_mel_emb(emb) - gpt_inputs = torch.full( - ( - emb.shape[0], - emb.shape[1] + 1, # +1 for the start_mel_token - ), - fill_value=1, - dtype=torch.long, - device=text_inputs.device, - ) - gpt_inputs[:, -1] = self.start_mel_token - return gpt_inputs - def inference_speech(self, speech_conditioning_latent, text_inputs, input_tokens=None, num_return_sequences=1, - max_generate_length=None, typical_sampling=False, typical_mass=.9, **hf_generate_kwargs): - - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs, _ = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - - conds = speech_conditioning_latent.unsqueeze(1) - emb = torch.cat([conds, text_emb], dim=1) - self.inference_model.store_mel_emb(emb) - - fake_inputs = torch.full((emb.shape[0], conds.shape[1] + emb.shape[1],), fill_value=1, dtype=torch.long, - device=text_inputs.device) - fake_inputs[:, -1] = self.start_mel_token - trunc_index = fake_inputs.shape[1] - if input_tokens is None: - inputs = fake_inputs - else: - assert num_return_sequences % input_tokens.shape[0] == 0, "The number of return sequences must be divisible by the number of input sequences" - fake_inputs = fake_inputs.repeat(num_return_sequences, 1) - input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1) - inputs = torch.cat([fake_inputs, input_tokens], dim=1) - - logits_processor = LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList() - max_length = trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length - gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token, - max_length=max_length, logits_processor=logits_processor, - num_return_sequences=num_return_sequences, **hf_generate_kwargs) - return gen[:, trunc_index:] - - def get_generator(self, fake_inputs, **hf_generate_kwargs): - return self.inference_model.generate_stream( - fake_inputs, - bos_token_id=self.start_mel_token, - pad_token_id=self.stop_mel_token, - eos_token_id=self.stop_mel_token, - max_length=500, - do_stream=True, - **hf_generate_kwargs, - ) -if __name__ == '__main__': - gpt = UnifiedVoice(model_dim=256, heads=4, train_solo_embeddings=True, use_mel_codes_as_input=True, max_conditioning_inputs=4) - l = gpt(torch.randn(2, 3, 80, 800), - torch.randint(high=120, size=(2,120)), - torch.tensor([32, 120]), - torch.randint(high=8192, size=(2,250)), - torch.tensor([250*256,195*256])) - gpt.text_forward(torch.randn(2,80,800), torch.randint(high=50, size=(2,80)), torch.tensor([32, 80])) diff --git a/spaces/MarcSkovMadsen/awesome-panel/app.py b/spaces/MarcSkovMadsen/awesome-panel/app.py deleted file mode 100644 index 0a07235da8e26cb2e2c2af8a22f06fc5d41758d4..0000000000000000000000000000000000000000 --- a/spaces/MarcSkovMadsen/awesome-panel/app.py +++ /dev/null @@ -1,39 +0,0 @@ -"""This files enables serving Panel apps on Hugging Face Spaces""" -import os -from subprocess import Popen - -# CONFIGURE YOUR SETTINGS HERE - -# Space separated list of .py or .ipynb files to serve -APPS_TO_SERVE = "pages/index.py pages/videostream.py" -# Prefix of the index .py or .ipynb file. Must be in APPS_TO_SERVE too -INDEX_PAGE = "index" - -# NORMALLY NO NEED TO CHANGE THE BELOW -PORT = os.environ.get("PORT", "7860") -ADDRESS = "0.0.0.0" -command = [ - "panel", - "serve", - *APPS_TO_SERVE.split(" "), - "--index", - INDEX_PAGE, - "--port", - PORT, - "--address", - ADDRESS, - "--allow-websocket-origin", - "localhost", - "--allow-websocket-origin", - "*.hf.space", - "--allow-websocket-origin", - "*.huggingface.co", - # "--log-level", - # "debug" -] -if os.name != "nt": - command = command + ["--num-procs", "4", "--num-threads", "4"] - -print(" ".join(command)) -worker = Popen(command) -worker.wait() \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_estimate_camera.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_estimate_camera.py deleted file mode 100644 index 93ebe38573789f3f7e3d969430234321085b6f2e..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/processing/run_estimate_camera.py +++ /dev/null @@ -1,153 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Google AI Perception Team Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Estimate AIST++ camera parameters.""" -import json -import math -import os -import random - -from absl import app -from absl import flags -from aist_plusplus.loader import AISTDataset -import aniposelib -import numpy as np -import vedo -import cv2 -from scipy.spatial.transform import Rotation as R - -FLAGS = flags.FLAGS -flags.DEFINE_string( - 'anno_dir', - '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/', - 'input local dictionary for AIST++ annotations.') -flags.DEFINE_string( - 'save_dir', - '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/cameras/', - 'output local dictionary that stores AIST++ camera parameters.') -flags.DEFINE_bool( - 'visualize', False, - 'Whether to visualize the cameras for debugging.') -random.seed(0) -np.random.seed(0) - - -def plot_cameras(cgroup): - points_world = np.array([ - [40., 0., 0.], # arrow x: red - [0., 40., 0.], # arrow y: green - [0., 0., 40.], # arrow z: blue - ]) - colors = ['r', 'g', 'b'] - axes_all = [ - vedo.Arrows([[0, 0, 0]], [points_world[i]]).c(colors[i]) - for i in range(3)] - for camera in cgroup.cameras: - rot_mat = cv2.Rodrigues(camera.rvec)[0] - cam_center = - np.linalg.inv(rot_mat).dot(camera.tvec) - points_cam = np.einsum('ij,kj->ki', np.linalg.inv(rot_mat), points_world) - axes_all += [ - vedo.Arrows([cam_center], [cam_center + points_cam[i]]).c(colors[i]) - for i in range(3)] - axes_all += [vedo.Text(camera.name, cam_center, s=10)] - return axes_all - - -def init_env_cameras(): - """Trys to estimate the environment manually.""" - cams = [] - for i, view in enumerate(AISTDataset.VIEWS): - f = 1600 - cx = 1920 // 2 - cy = 1080 // 2 - if view == 'c09': - r1 = R.from_euler('y', 180, degrees=True) - r2 = R.from_euler('z', 180, degrees=True) - rvec = (r1 * r2).as_rotvec() - tvec = [0, 170, 500] - else: - r1 = R.from_euler('y', 180 - 360 // 8 * i, degrees=True) - r2 = R.from_euler('z', 180, degrees=True) - rvec = (r1 * r2).as_rotvec() - tvec = [0, 180, 500] - - matrix = np.array([ - [f, 0, cx], - [0, f, cy], - [0, 0, 1], - ], dtype=np.float32) - cams.append( - aniposelib.cameras.Camera( - matrix=matrix, rvec=rvec, tvec=tvec, name=view, size=(1920, 1080))) - cgroup = aniposelib.cameras.CameraGroup(cams) - return cgroup - - -def main(_): - aist_dataset = AISTDataset(anno_dir=FLAGS.anno_dir) - - for env_name, seq_names in aist_dataset.mapping_env2seq.items(): - # Init camera parameters - cgroup = init_env_cameras() - - # Select a set of sequences for optimizing camera parameters. - seq_names = random.choices(seq_names, k=20) - - # Load 2D keypoints - keypoints2d_all = [] - for seq_name in seq_names: - keypoints2d_raw, _, _ = AISTDataset.load_keypoint2d( - aist_dataset.keypoint2d_dir, seq_name=seq_name) - # Special cases - if seq_name == 'gBR_sBM_cAll_d04_mBR0_ch01': - keypoints2d_raw[4] = np.nan # not synced view - if seq_name == 'gJB_sBM_cAll_d07_mJB3_ch05': - keypoints2d_raw[6] = np.nan # size 640x480 - keypoints2d_all.append(keypoints2d_raw) - keypoints2d_all = np.concatenate(keypoints2d_all, axis=1) - - # Filter keypoints to select those best points - kpt_thre = 0.5 - ignore_idxs = np.where(keypoints2d_all[:, :, :, 2] < kpt_thre) - keypoints2d_all[ignore_idxs[0], ignore_idxs[1], ignore_idxs[2], :] = np.nan - keypoints2d_all = keypoints2d_all[..., 0:2] - - # Apply bundle adjustment and dump the camera parameters - nviews = keypoints2d_all.shape[0] - cgroup.bundle_adjust_iter( - keypoints2d_all.reshape(nviews, -1, 2), - n_iters=20, - n_samp_iter=500, - n_samp_full=5000, - verbose=True) - os.makedirs(FLAGS.save_dir, exist_ok=True) - camera_file = os.path.join(FLAGS.save_dir, f'{env_name}.json') - with open(camera_file, 'w') as f: - json.dump([camera.get_dict() for camera in cgroup.cameras], f) - - # visualize the world with one frame - if FLAGS.visualize: - print("seq_name:", seq_name) - axes_all = plot_cameras(cgroup) - keypoints3d = cgroup.triangulate( - keypoints2d_all[:, 0].reshape(nviews, -1, 2) - ).reshape(-1, 3) - vedo.show( - *axes_all, vedo.Points(keypoints3d, r=12), - interactive=True, axes=True) - vedo.clear() - - -if __name__ == '__main__': - app.run(main) diff --git a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/hifigan/utils.py b/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/MashiroSA/sovits-emu-voice-transform/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/MathysL/AutoGPT4/tests/context.py b/spaces/MathysL/AutoGPT4/tests/context.py deleted file mode 100644 index cef969db69ab189109b935bba9ed06696cf5337a..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/tests/context.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -import sys - -sys.path.insert( - 0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../scripts")) -) diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py deleted file mode 100644 index e87e639eb94993c3e4068d6bd4d21f902aee7694..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/lib/sdf.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np - - -def create_grid(resX, resY, resZ, b_min=np.array([0, 0, 0]), b_max=np.array([1, 1, 1]), transform=None): - ''' - Create a dense grid of given resolution and bounding box - :param resX: resolution along X axis - :param resY: resolution along Y axis - :param resZ: resolution along Z axis - :param b_min: vec3 (x_min, y_min, z_min) bounding box corner - :param b_max: vec3 (x_max, y_max, z_max) bounding box corner - :return: [3, resX, resY, resZ] coordinates of the grid, and transform matrix from mesh index - ''' - coords = np.mgrid[:resX, :resY, :resZ] - coords = coords.reshape(3, -1) - coords_matrix = np.eye(4) - length = b_max - b_min - coords_matrix[0, 0] = length[0] / resX - coords_matrix[1, 1] = length[1] / resY - coords_matrix[2, 2] = length[2] / resZ - coords_matrix[0:3, 3] = b_min - coords = np.matmul(coords_matrix[:3, :3], coords) + coords_matrix[:3, 3:4] - if transform is not None: - coords = np.matmul(transform[:3, :3], coords) + transform[:3, 3:4] - coords_matrix = np.matmul(transform, coords_matrix) - coords = coords.reshape(3, resX, resY, resZ) - return coords, coords_matrix - - -def batch_eval(points, eval_func, num_samples=512 * 512 * 512): - num_pts = points.shape[1] - sdf = np.zeros(num_pts) - - num_batches = num_pts // num_samples - for i in range(num_batches): - sdf[i * num_samples:i * num_samples + num_samples] = eval_func( - points[:, i * num_samples:i * num_samples + num_samples]) - if num_pts % num_samples: - sdf[num_batches * num_samples:] = eval_func(points[:, num_batches * num_samples:]) - - return sdf - - -def eval_grid(coords, eval_func, num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - coords = coords.reshape([3, -1]) - sdf = batch_eval(coords, eval_func, num_samples=num_samples) - return sdf.reshape(resolution) - - -def eval_grid_octree(coords, eval_func, - init_resolution=64, threshold=0.01, - num_samples=512 * 512 * 512): - resolution = coords.shape[1:4] - - sdf = np.zeros(resolution) - - dirty = np.ones(resolution, dtype=np.bool) - grid_mask = np.zeros(resolution, dtype=np.bool) - - reso = resolution[0] // init_resolution - - while reso > 0: - # subdivide the grid - grid_mask[0:resolution[0]:reso, 0:resolution[1]:reso, 0:resolution[2]:reso] = True - # test samples in this iteration - test_mask = np.logical_and(grid_mask, dirty) - #print('step size:', reso, 'test sample size:', test_mask.sum()) - points = coords[:, test_mask] - - sdf[test_mask] = batch_eval(points, eval_func, num_samples=num_samples) - dirty[test_mask] = False - - # do interpolation - if reso <= 1: - break - for x in range(0, resolution[0] - reso, reso): - for y in range(0, resolution[1] - reso, reso): - for z in range(0, resolution[2] - reso, reso): - # if center marked, return - if not dirty[x + reso // 2, y + reso // 2, z + reso // 2]: - continue - v0 = sdf[x, y, z] - v1 = sdf[x, y, z + reso] - v2 = sdf[x, y + reso, z] - v3 = sdf[x, y + reso, z + reso] - v4 = sdf[x + reso, y, z] - v5 = sdf[x + reso, y, z + reso] - v6 = sdf[x + reso, y + reso, z] - v7 = sdf[x + reso, y + reso, z + reso] - v = np.array([v0, v1, v2, v3, v4, v5, v6, v7]) - v_min = v.min() - v_max = v.max() - # this cell is all the same - if (v_max - v_min) < threshold: - sdf[x:x + reso, y:y + reso, z:z + reso] = (v_max + v_min) / 2 - dirty[x:x + reso, y:y + reso, z:z + reso] = False - reso //= 2 - - return sdf.reshape(resolution) diff --git a/spaces/Miyuki13242/Daily/Dockerfile b/spaces/Miyuki13242/Daily/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Miyuki13242/Daily/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/MrVicente/RA-BART/custom_bart/config.py b/spaces/MrVicente/RA-BART/custom_bart/config.py deleted file mode 100644 index 6c2eb062718c4d0dd80f3040f9c673ca1b9ac924..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/custom_bart/config.py +++ /dev/null @@ -1,197 +0,0 @@ -from transformers import BartConfig - -class BartCustomConfig(BartConfig): - def __init__( - self, - model_type='bart', - vocab_size=50265, - max_position_embeddings=1024, - encoder_layers=12, - encoder_ffn_dim=4096, - encoder_attention_heads=16, - decoder_layers=12, - decoder_ffn_dim=4096, - decoder_attention_heads=16, - encoder_layerdrop=0.0, - decoder_layerdrop=0.0, - activation_function="gelu", - d_model=1024, - dropout=0.1, - attention_dropout=0.1, - activation_dropout=0.1, - init_std=0.02, - classifier_dropout=0.0, - classif_dropout=0.1, - scale_embedding=False, - use_cache=True, - num_labels=3, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - is_encoder_decoder=True, - decoder_start_token_id=2, - forced_eos_token_id=2, - forced_bos_token_id=0, - no_repeat_ngram_size=3, # adding - num_hidden_layers=12, - normalize_before=False, - num_beams=4, - add_bias_logits=False, - add_final_layer_norm=False, - early_stopping=True, - gradient_checkpointing=False, - num_relation_kinds = 0, - use_same_relation_kv_emb = True, - is_simple_mask_commonsense = False, - should_embed_positions = False, - heads_mask = None, - **kwargs - ): - super(BartCustomConfig, self).__init__( - model_type=model_type, - vocab_size=vocab_size, - max_position_embeddings=max_position_embeddings, - encoder_layers=encoder_layers, - encoder_ffn_dim=encoder_ffn_dim, - encoder_attention_heads=encoder_attention_heads, - decoder_layers=decoder_layers, - decoder_ffn_dim=decoder_ffn_dim, - decoder_attention_heads=decoder_attention_heads, - encoder_layerdrop=encoder_layerdrop, - decoder_layerdrop=decoder_layerdrop, - activation_function=activation_function, - d_model=d_model, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - init_std=init_std, - classifier_dropout=classifier_dropout, - classif_dropout=classif_dropout, - scale_embedding=scale_embedding, - use_cache=use_cache, - num_labels=num_labels, - pad_token_id = pad_token_id, - bos_token_id = bos_token_id, - eos_token_id = eos_token_id, - is_encoder_decoder = is_encoder_decoder, - decoder_start_token_id = decoder_start_token_id, - forced_eos_token_id = forced_eos_token_id, - forced_bos_token_id=forced_bos_token_id, - no_repeat_ngram_size=no_repeat_ngram_size, # Adding - normalize_before=normalize_before, - num_hidden_layers=num_hidden_layers, - num_beams=num_beams, - add_bias_logits=add_bias_logits, - add_final_layer_norm=add_final_layer_norm, - early_stopping=early_stopping, - gradient_checkpointing=gradient_checkpointing, - num_relation_kinds = num_relation_kinds, - use_same_relation_kv_emb = use_same_relation_kv_emb, - is_simple_mask_commonsense = is_simple_mask_commonsense, - heads_mask = None, - should_embed_positions=False, - **kwargs - ) - self.num_relation_kinds = num_relation_kinds - self.use_same_relation_kv_emb = use_same_relation_kv_emb - self.is_simple_mask_commonsense = is_simple_mask_commonsense - self.heads_mask = heads_mask - self.should_embed_positions = should_embed_positions - -class BartSmallCustomConfig(BartConfig): - def __init__( - self, - vocab_size=50265, - max_position_embeddings=1024, - encoder_layers=6, - encoder_ffn_dim=3072, - encoder_attention_heads=12, - decoder_layers=12, - decoder_ffn_dim=3072, - decoder_attention_heads=12, - encoder_layerdrop=0.0, - decoder_layerdrop=0.0, - activation_function="gelu", - d_model=768, - dropout=0.1, - attention_dropout=0.1, - activation_dropout=0.1, - init_std=0.02, - classifier_dropout=0.0, - classif_dropout= 0.1, - scale_embedding=False, - use_cache=True, - num_labels=3, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - is_encoder_decoder=True, - decoder_start_token_id=2, - forced_eos_token_id=2, - forced_bos_token_id=0, - no_repeat_ngram_size=3, #adding - num_hidden_layers=6, - normalize_before=False, - num_beams=4, - add_bias_logits=False, - add_final_layer_norm=False, - _name_or_path="bart-base", - early_stopping=True, - gradient_checkpointing=False, - num_relation_kinds = 0, - use_same_relation_kv_emb = True, - is_simple_mask_commonsense = False, - should_embed_positions = True, - heads_mask = None, - **kwargs - ): - super(BartSmallCustomConfig, self).__init__( - vocab_size=vocab_size, - max_position_embeddings=max_position_embeddings, - encoder_layers=encoder_layers, - encoder_ffn_dim=encoder_ffn_dim, - encoder_attention_heads=encoder_attention_heads, - decoder_layers=decoder_layers, - decoder_ffn_dim=decoder_ffn_dim, - decoder_attention_heads=decoder_attention_heads, - encoder_layerdrop=encoder_layerdrop, - decoder_layerdrop=decoder_layerdrop, - activation_function=activation_function, - d_model=d_model, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - init_std=init_std, - classifier_dropout=classifier_dropout, - classif_dropout=classif_dropout, - scale_embedding=scale_embedding, - use_cache=use_cache, - num_labels=num_labels, - pad_token_id = pad_token_id, - bos_token_id = bos_token_id, - eos_token_id = eos_token_id, - is_encoder_decoder = is_encoder_decoder, - decoder_start_token_id = decoder_start_token_id, - forced_eos_token_id = forced_eos_token_id, - forced_bos_token_id=forced_bos_token_id, - no_repeat_ngram_size = no_repeat_ngram_size, #Adding - normalize_before = normalize_before, - num_hidden_layers=num_hidden_layers, - num_beams=num_beams, - add_bias_logits=add_bias_logits, - add_final_layer_norm=add_final_layer_norm, - _name_or_path=_name_or_path, - early_stopping=early_stopping, - gradient_checkpointing=gradient_checkpointing, - num_relation_kinds = num_relation_kinds, - use_same_relation_kv_emb = use_same_relation_kv_emb, - is_simple_mask_commonsense = is_simple_mask_commonsense, - heads_mask = heads_mask, - should_embed_positions=should_embed_positions, - **kwargs - ) - self.num_relation_kinds = num_relation_kinds - self.use_same_relation_kv_emb = use_same_relation_kv_emb - self.is_simple_mask_commonsense = is_simple_mask_commonsense - self.heads_mask = heads_mask - self.should_embed_positions = should_embed_positions diff --git a/spaces/Mrleo/MyChatGPT/ChuanhuChatbot.py b/spaces/Mrleo/MyChatGPT/ChuanhuChatbot.py deleted file mode 100644 index 086dc6a1e3da91f4078e163ffac03ab54ed0a7d0..0000000000000000000000000000000000000000 --- a/spaces/Mrleo/MyChatGPT/ChuanhuChatbot.py +++ /dev/null @@ -1,159 +0,0 @@ -import gradio as gr -# import openai -import os -import sys -import argparse -from utils import * -from presets import * - - -my_api_key = "" # 在这里输入你的 API 密钥 - -#if we are running in Docker -if os.environ.get('dockerrun') == 'yes': - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get('my_api_key') - if my_api_key == "empty": - print("Please give a api key!") - sys.exit(1) - #auth - username = os.environ.get('USERNAME') - password = os.environ.get('PASSWORD') - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if not my_api_key and os.path.exists("api_key.txt") and os.path.getsize("api_key.txt"): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r") as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess - -with gr.Blocks(css=customCSS) as demo: - gr.HTML(title) - with gr.Row(): - keyTxt = gr.Textbox(show_label=False, placeholder=f"在这里输入你的OpenAI API-key...", - value=my_api_key, type="password", visible=not HIDE_MY_KEY).style(container=True) - use_streaming_checkbox = gr.Checkbox(label="实时传输回答", value=True, visible=enable_streaming_option) - chatbot = gr.Chatbot() # .style(color_map=("#1D51EE", "#585A5B")) - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - TRUECOMSTANT = gr.State(True) - FALSECONSTANT = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=12): - user_input = gr.Textbox(show_label=False, placeholder="在这里输入").style( - container=False) - with gr.Column(min_width=50, scale=1): - submitBtn = gr.Button("🚀", variant="primary") - with gr.Row(): - emptyBtn = gr.Button("🧹 新的对话") - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除最近一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - status_display = gr.Markdown("status: ready") - systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"在这里输入System Prompt...", - label="System prompt", value=initial_prompt).style(container=True) - with gr.Accordion(label="加载Prompt模板", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown(label="选择Prompt模板集合文件", choices=get_template_names(plain=True), multiselect=False, value=get_template_names(plain=True)[0]) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - templaeFileReadBtn = gr.Button("📂 读入模板") - with gr.Row(): - with gr.Column(scale=6): - templateSelectDropdown = gr.Dropdown(label="从Prompt模板中加载", choices=load_template(get_template_names(plain=True)[0], mode=1), multiselect=False, value=load_template(get_template_names(plain=True)[0], mode=1)[0]) - with gr.Column(scale=1): - templateApplyBtn = gr.Button("⬇️ 应用") - with gr.Accordion(label="保存/加载对话历史记录", open=False): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, placeholder=f"在这里输入保存的文件名...", label="设置保存文件名", value="对话历史记录").style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown(label="从列表中加载对话", choices=get_history_names(plain=True), multiselect=False, value=get_history_names(plain=True)[0]) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - historyReadBtn = gr.Button("📂 读入对话") - #inputs, top_p, temperature, top_k, repetition_penalty - with gr.Accordion("参数", open=False): - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, - interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, - step=0.1, interactive=True, label="Temperature",) - #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",) - #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", ) - gr.Markdown(description) - - - user_input.submit(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - user_input.submit(reset_textbox, [], [user_input]) - - submitBtn.click(predict, [keyTxt, systemPromptTxt, history, user_input, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - submitBtn.click(reset_textbox, [], [user_input]) - - emptyBtn.click(reset_state, outputs=[chatbot, history, token_count, status_display], show_progress=True) - - retryBtn.click(retry, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - delLastBtn.click(delete_last_conversation, [chatbot, history, token_count, use_streaming_checkbox], [ - chatbot, history, token_count, status_display], show_progress=True) - - reduceTokenBtn.click(reduce_token_size, [keyTxt, systemPromptTxt, history, chatbot, token_count, top_p, temperature, use_streaming_checkbox], [chatbot, history, status_display, token_count], show_progress=True) - - saveHistoryBtn.click(save_chat_history, [ - saveFileName, systemPromptTxt, history, chatbot], None, show_progress=True) - - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - - historyReadBtn.click(load_chat_history, [historyFileSelectDropdown, systemPromptTxt, history, chatbot], [saveFileName, systemPromptTxt, history, chatbot], show_progress=True) - - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - - templaeFileReadBtn.click(load_template, [templateFileSelectDropdown], [promptTemplates, templateSelectDropdown], show_progress=True) - - templateApplyBtn.click(get_template_content, [promptTemplates, templateSelectDropdown, systemPromptTxt], [systemPromptTxt], show_progress=True) - -print("川虎的温馨提示:访问 http://localhost:7860 查看界面") -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - #if running in Docker - if dockerflag: - if authflag: - demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=(username, password)) - else: - demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) - #if not running in Docker - else: - if authflag: - demo.queue().launch(share=False, auth=(username, password)) - else: - demo.queue().launch(share=False) # 改为 share=True 可以创建公开分享链接 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - #demo.queue().launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - #demo.queue().launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/NATSpeech/PortaSpeech/utils/nn/seq_utils.py b/spaces/NATSpeech/PortaSpeech/utils/nn/seq_utils.py deleted file mode 100644 index 1308bf7d1806a6c36de9c8af5e9d217eaefa7b56..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/utils/nn/seq_utils.py +++ /dev/null @@ -1,305 +0,0 @@ -from collections import defaultdict -import torch -import torch.nn.functional as F - - -def make_positions(tensor, padding_idx): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return ( - torch.cumsum(mask, dim=1).type_as(mask) * mask - ).long() + padding_idx - - -def softmax(x, dim): - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def sequence_mask(lengths, maxlen, dtype=torch.bool): - if maxlen is None: - maxlen = lengths.max() - mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t() - mask.type(dtype) - return mask - - -def weights_nonzero_speech(target): - # target : B x T x mel - # Assign weight 1.0 to all labels except for padding (id=0). - dim = target.size(-1) - return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim) - - -INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0) - - -def _get_full_incremental_state_key(module_instance, key): - module_name = module_instance.__class__.__name__ - - # assign a unique ID to each module instance, so that incremental state is - # not shared across module instances - if not hasattr(module_instance, '_instance_id'): - INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1 - module_instance._instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name] - - return '{}.{}.{}'.format(module_name, module_instance._instance_id, key) - - -def get_incremental_state(module, incremental_state, key): - """Helper for getting incremental state for an nn.Module.""" - full_key = _get_full_incremental_state_key(module, key) - if incremental_state is None or full_key not in incremental_state: - return None - return incremental_state[full_key] - - -def set_incremental_state(module, incremental_state, key, value): - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - full_key = _get_full_incremental_state_key(module, key) - incremental_state[full_key] = value - - -def fill_with_neg_inf(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(float('-inf')).type_as(t) - - -def fill_with_neg_inf2(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(-1e8).type_as(t) - - -def select_attn(attn_logits, type='best'): - """ - - :param attn_logits: [n_layers, B, n_head, T_sp, T_txt] - :return: - """ - encdec_attn = torch.stack(attn_logits, 0).transpose(1, 2) - # [n_layers * n_head, B, T_sp, T_txt] - encdec_attn = (encdec_attn.reshape([-1, *encdec_attn.shape[2:]])).softmax(-1) - if type == 'best': - indices = encdec_attn.max(-1).values.sum(-1).argmax(0) - encdec_attn = encdec_attn.gather( - 0, indices[None, :, None, None].repeat(1, 1, encdec_attn.size(-2), encdec_attn.size(-1)))[0] - return encdec_attn - elif type == 'mean': - return encdec_attn.mean(0) - - -def make_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - Tensor: Mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[0, 0, 0, 0 ,0], - [0, 0, 0, 1, 1], - [0, 0, 1, 1, 1]] - With the reference tensor. - >>> xs = torch.zeros((3, 2, 4)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0], - [0, 0, 0, 0]], - [[0, 0, 0, 1], - [0, 0, 0, 1]], - [[0, 0, 1, 1], - [0, 0, 1, 1]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. - >>> xs = torch.zeros((3, 6, 6)) - >>> make_pad_mask(lengths, xs, 1) - tensor([[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8) - >>> make_pad_mask(lengths, xs, 2) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - """ - if length_dim == 0: - raise ValueError("length_dim cannot be 0: {}".format(length_dim)) - - if not isinstance(lengths, list): - lengths = lengths.tolist() - bs = int(len(lengths)) - if xs is None: - maxlen = int(max(lengths)) - else: - maxlen = xs.size(length_dim) - - seq_range = torch.arange(0, maxlen, dtype=torch.int64) - seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen) - seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1) - mask = seq_range_expand >= seq_length_expand - - if xs is not None: - assert xs.size(0) == bs, (xs.size(0), bs) - - if length_dim < 0: - length_dim = xs.dim() + length_dim - # ind = (:, None, ..., None, :, , None, ..., None) - ind = tuple( - slice(None) if i in (0, length_dim) else None for i in range(xs.dim()) - ) - mask = mask[ind].expand_as(xs).to(xs.device) - return mask - - -def make_non_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of non-padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - ByteTensor: mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[1, 1, 1, 1 ,1], - [1, 1, 1, 0, 0], - [1, 1, 0, 0, 0]] - With the reference tensor. - >>> xs = torch.zeros((3, 2, 4)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1], - [1, 1, 1, 1]], - [[1, 1, 1, 0], - [1, 1, 1, 0]], - [[1, 1, 0, 0], - [1, 1, 0, 0]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. - >>> xs = torch.zeros((3, 6, 6)) - >>> make_non_pad_mask(lengths, xs, 1) - tensor([[[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8) - >>> make_non_pad_mask(lengths, xs, 2) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - """ - return ~make_pad_mask(lengths, xs, length_dim) - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len).to(lengths.device) - mask = (ids < lengths.unsqueeze(1)).bool() - return mask - - -def group_hidden_by_segs(h, seg_ids, max_len): - """ - - :param h: [B, T, H] - :param seg_ids: [B, T] - :return: h_ph: [B, T_ph, H] - """ - B, T, H = h.shape - h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h) - all_ones = h.new_ones(h.shape[:2]) - cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous() - h_gby_segs = h_gby_segs[:, 1:] - cnt_gby_segs = cnt_gby_segs[:, 1:] - h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1) - return h_gby_segs, cnt_gby_segs diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/bert_models_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/bert_models_test.py deleted file mode 100644 index 93763b45bfc53c5d32de2df7f7f0f72894e9556f..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/bert_models_test.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from official.nlp.bert import bert_models -from official.nlp.bert import configs as bert_configs -from official.nlp.modeling import networks - - -class BertModelsTest(tf.test.TestCase): - - def setUp(self): - super(BertModelsTest, self).setUp() - self._bert_test_config = bert_configs.BertConfig( - attention_probs_dropout_prob=0.0, - hidden_act='gelu', - hidden_dropout_prob=0.0, - hidden_size=16, - initializer_range=0.02, - intermediate_size=32, - max_position_embeddings=128, - num_attention_heads=2, - num_hidden_layers=2, - type_vocab_size=2, - vocab_size=30522) - - def test_pretrain_model(self): - model, encoder = bert_models.pretrain_model( - self._bert_test_config, - seq_length=5, - max_predictions_per_seq=2, - initializer=None, - use_next_sentence_label=True) - self.assertIsInstance(model, tf.keras.Model) - self.assertIsInstance(encoder, networks.TransformerEncoder) - - # model has one scalar output: loss value. - self.assertEqual(model.output.shape.as_list(), [None,]) - - # Expect two output from encoder: sequence and classification output. - self.assertIsInstance(encoder.output, list) - self.assertLen(encoder.output, 2) - # shape should be [batch size, seq_length, hidden_size] - self.assertEqual(encoder.output[0].shape.as_list(), [None, 5, 16]) - # shape should be [batch size, hidden_size] - self.assertEqual(encoder.output[1].shape.as_list(), [None, 16]) - - def test_squad_model(self): - model, core_model = bert_models.squad_model( - self._bert_test_config, - max_seq_length=5, - initializer=None, - hub_module_url=None, - hub_module_trainable=None) - self.assertIsInstance(model, tf.keras.Model) - self.assertIsInstance(core_model, tf.keras.Model) - - # Expect two output from model: start positions and end positions - self.assertIsInstance(model.output, list) - self.assertLen(model.output, 2) - # shape should be [batch size, seq_length] - self.assertEqual(model.output[0].shape.as_list(), [None, 5]) - # shape should be [batch size, seq_length] - self.assertEqual(model.output[1].shape.as_list(), [None, 5]) - - # Expect two output from core_model: sequence and classification output. - self.assertIsInstance(core_model.output, list) - self.assertLen(core_model.output, 2) - # shape should be [batch size, seq_length, hidden_size] - self.assertEqual(core_model.output[0].shape.as_list(), [None, 5, 16]) - # shape should be [batch size, hidden_size] - self.assertEqual(core_model.output[1].shape.as_list(), [None, 16]) - - def test_classifier_model(self): - model, core_model = bert_models.classifier_model( - self._bert_test_config, - num_labels=3, - max_seq_length=5, - final_layer_initializer=None, - hub_module_url=None, - hub_module_trainable=None) - self.assertIsInstance(model, tf.keras.Model) - self.assertIsInstance(core_model, tf.keras.Model) - - # model has one classification output with num_labels=3. - self.assertEqual(model.output.shape.as_list(), [None, 3]) - - # Expect two output from core_model: sequence and classification output. - self.assertIsInstance(core_model.output, list) - self.assertLen(core_model.output, 2) - # shape should be [batch size, 1, hidden_size] - self.assertEqual(core_model.output[0].shape.as_list(), [None, 1, 16]) - # shape should be [batch size, hidden_size] - self.assertEqual(core_model.output[1].shape.as_list(), [None, 16]) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NMEX/rvc-hoyogame-v2/weights/genshin-impactV2/AutoFile/index.js b/spaces/NMEX/rvc-hoyogame-v2/weights/genshin-impactV2/AutoFile/index.js deleted file mode 100644 index 10fbcd39f1d3ee5f93b2af16b3edc61fb280ec2e..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyogame-v2/weights/genshin-impactV2/AutoFile/index.js +++ /dev/null @@ -1,44 +0,0 @@ -const testFolder = './test/'; -const fs = require('fs'); -const { json } = require('stream/consumers'); - - -var Model = { - folder: String, - enable: Boolean, - name: String, - title: String, - cover: String, - index: String, - author: String -} - - var Models = []; - - -(async () => { - try { - const files = await fs.readdir(testFolder,async (err,files) => { - for(var file of files) { - console.log(file) - Models.push(new Object({ - name: file - })) - const filesIn = await fs.readdir(testFolder+file,(err,filesIn) => { - for(var fileIn of filesIn) { - Models.push(new Object({ - model: file, - index: file - })) - } - }) - } - }) - } catch(err) { - console.log(err) - } - setTimeout(() => { - console.log(JSON.stringify(Models)) - console.log(Models) - }, 1000); -})(); \ No newline at end of file diff --git a/spaces/OAOA/DifFace/basicsr/utils/__init__.py b/spaces/OAOA/DifFace/basicsr/utils/__init__.py deleted file mode 100644 index 9569c50780415b356c8e06edac5d960cf1fe1e91..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/__init__.py +++ /dev/null @@ -1,47 +0,0 @@ -from .color_util import bgr2ycbcr, rgb2ycbcr, rgb2ycbcr_pt, ycbcr2bgr, ycbcr2rgb -from .diffjpeg import DiffJPEG -from .file_client import FileClient -from .img_process_util import USMSharp, usm_sharp -from .img_util import crop_border, imfrombytes, img2tensor, imwrite, tensor2img -from .logger import AvgTimer, MessageLogger, get_env_info, get_root_logger, init_tb_logger, init_wandb_logger -from .misc import check_resume, get_time_str, make_exp_dirs, mkdir_and_rename, scandir, set_random_seed, sizeof_fmt -from .options import yaml_load - -__all__ = [ - # color_util.py - 'bgr2ycbcr', - 'rgb2ycbcr', - 'rgb2ycbcr_pt', - 'ycbcr2bgr', - 'ycbcr2rgb', - # file_client.py - 'FileClient', - # img_util.py - 'img2tensor', - 'tensor2img', - 'imfrombytes', - 'imwrite', - 'crop_border', - # logger.py - 'MessageLogger', - 'AvgTimer', - 'init_tb_logger', - 'init_wandb_logger', - 'get_root_logger', - 'get_env_info', - # misc.py - 'set_random_seed', - 'get_time_str', - 'mkdir_and_rename', - 'make_exp_dirs', - 'scandir', - 'check_resume', - 'sizeof_fmt', - # diffjpeg - 'DiffJPEG', - # img_process_util - 'USMSharp', - 'usm_sharp', - # options - 'yaml_load' -] diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_vggtransformer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_vggtransformer.py deleted file mode 100644 index 4dc73b8c7379970dc0bcc16fcb088a64a1bd7e3b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/speech_recognition/test_vggtransformer.py +++ /dev/null @@ -1,135 +0,0 @@ -#!/usr/bin/env python3 - -# import models/encoder/decoder to be tested -from examples.speech_recognition.models.vggtransformer import ( - TransformerDecoder, - VGGTransformerEncoder, - VGGTransformerModel, - vggtransformer_1, - vggtransformer_2, - vggtransformer_base, -) - -# import base test class -from .asr_test_base import ( - DEFAULT_TEST_VOCAB_SIZE, - TestFairseqDecoderBase, - TestFairseqEncoderBase, - TestFairseqEncoderDecoderModelBase, - get_dummy_dictionary, - get_dummy_encoder_output, - get_dummy_input, -) - - -class VGGTransformerModelTest_mid(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_1 use 14 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. - """ - args.transformer_enc_config = ( - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_1, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerModelTest_big(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_2 use 16 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. - """ - args.transformer_enc_config = ( - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_2, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerModelTest_base(TestFairseqEncoderDecoderModelBase): - def setUp(self): - def override_config(args): - """ - vggtrasformer_base use 12 layers of transformer, - for testing purpose, it is too expensive. For fast turn-around - test, reduce the number of layers to 3. - """ - args.transformer_enc_config = ( - "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 3" - ) - - super().setUp() - extra_args_setter = [vggtransformer_base, override_config] - - self.setUpModel(VGGTransformerModel, extra_args_setter) - self.setUpInput(get_dummy_input(T=50, D=80, B=5, K=DEFAULT_TEST_VOCAB_SIZE)) - - -class VGGTransformerEncoderTest(TestFairseqEncoderBase): - def setUp(self): - super().setUp() - - self.setUpInput(get_dummy_input(T=50, D=80, B=5)) - - def test_forward(self): - print("1. test standard vggtransformer") - self.setUpEncoder(VGGTransformerEncoder(input_feat_per_channel=80)) - super().test_forward() - print("2. test vggtransformer with limited right context") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, transformer_context=(-1, 5) - ) - ) - super().test_forward() - print("3. test vggtransformer with limited left context") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, transformer_context=(5, -1) - ) - ) - super().test_forward() - print("4. test vggtransformer with limited right context and sampling") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, - transformer_context=(-1, 12), - transformer_sampling=(2, 2), - ) - ) - super().test_forward() - print("5. test vggtransformer with windowed context and sampling") - self.setUpEncoder( - VGGTransformerEncoder( - input_feat_per_channel=80, - transformer_context=(12, 12), - transformer_sampling=(2, 2), - ) - ) - - -class TransformerDecoderTest(TestFairseqDecoderBase): - def setUp(self): - super().setUp() - - dict = get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE) - decoder = TransformerDecoder(dict) - dummy_encoder_output = get_dummy_encoder_output(encoder_out_shape=(50, 5, 256)) - - self.setUpDecoder(decoder) - self.setUpInput(dummy_encoder_output) - self.setUpPrevOutputTokens() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh deleted file mode 100644 index 913c1d8e4357c146026b86e78f0b16f921776441..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/usr/bin/env bash - -out_root=/tmp -out_name=train_${RANDOM} -num_nonsil_states=1 - -valid="dev_other" -train="train" -mono_size="-1" # 2000 -tri1_size="-1" # 5000 -tri2b_size="-1" # 10000 -tri3b_size="-1" # 10000 - -# Acoustic model parameters -numLeavesTri1=2000 -numGaussTri1=10000 -numLeavesMLLT=2500 -numGaussMLLT=15000 -numLeavesSAT=2500 -numGaussSAT=15000 - -stage=1 -max_stage=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -data=$1 -lang=$2 -lang_test=$3 - -exp_root=$out_root/$out_name - -# you might not want to do this for interactive shells. -set -e - - -if [ $stage -le 1 ] && [ $max_stage -ge 1 ]; then - # train a monophone system - if [ ! $mono_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $mono_size $data/${train}_${mono_size} - mono_train=${train}_${mono_size} - else - mono_train=${train} - fi - - steps/train_mono.sh --boost-silence 1.25 --nj 20 --cmd "$train_cmd" \ - --initial-beam 40 --regular-beam 60 --retry-beam 120 \ - $data/$mono_train $lang $exp_root/mono - - utils/mkgraph.sh $lang_test $exp_root/mono $exp_root/mono/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/mono/graph $data/$valid $exp_root/mono/decode_$valid & -fi - - -if [ $stage -le 2 ] && [ $max_stage -ge 2 ]; then - # train a first delta + delta-delta triphone system on a subset of 5000 utterances - if [ ! $tri1_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri1_size $data/${train}_${tri1_size} - tri1_train=${train}_${tri1_size} - else - tri1_train=${train} - fi - - steps/align_si.sh --boost-silence 1.25 --nj 10 --cmd "$train_cmd" \ - $data/$tri1_train $lang \ - $exp_root/mono $exp_root/mono_ali_${tri1_train} - - steps_gan/train_deltas.sh --boost-silence 1.25 --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states $numLeavesTri1 $numGaussTri1 \ - $data/$tri1_train $lang \ - $exp_root/mono_ali_${tri1_train} $exp_root/tri1 - - utils/mkgraph.sh $lang_test $exp_root/tri1 $exp_root/tri1/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri1/graph $data/$valid $exp_root/tri1/decode_$valid & -fi - -if [ $stage -le 3 ] && [ $max_stage -ge 3 ]; then - # train an LDA+MLLT system. - if [ ! $tri2b_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri2b_size $data/${train}_${tri2b_size} - tri2b_train=${train}_${tri2b_size} - else - tri2b_train=${train} - fi - - steps/align_si.sh --nj 10 --cmd "$train_cmd" \ - $data/$tri2b_train $lang \ - $exp_root/tri1 $exp_root/tri1_ali_${tri2b_train} - - steps_gan/train_lda_mllt.sh --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states \ - --splice-opts "--left-context=3 --right-context=3" $numLeavesMLLT $numGaussMLLT \ - $data/$tri2b_train $lang \ - $exp_root/tri1_ali_${tri2b_train} $exp_root/tri2b - - utils/mkgraph.sh $lang_test $exp_root/tri2b $exp_root/tri2b/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri2b/graph $data/$valid $exp_root/tri2b/decode_$valid & -fi - - -if [ $stage -le 4 ] && [ $max_stage -ge 4 ]; then - # Train tri3b, which is LDA+MLLT+SAT on 10k utts - if [ ! $tri3b_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri3b_size $data/${train}_${tri3b_size} - tri3b_train=${train}_${tri3b_size} - else - tri3b_train=${train} - fi - - steps/align_si.sh --nj 10 --cmd "$train_cmd" --use-graphs true \ - $data/$tri3b_train $lang \ - $exp_root/tri2b $exp_root/tri2b_ali_${tri2b_train} - - steps_gan/train_sat.sh --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states $numLeavesSAT $numGaussSAT \ - $data/$tri3b_train $lang \ - $exp_root/tri2b_ali_${tri2b_train} $exp_root/tri3b - - utils/mkgraph.sh $lang_test $exp_root/tri3b $exp_root/tri3b/graph - steps/decode_fmllr.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri3b/graph $data/$valid $exp_root/tri3b/decode_$valid & -fi - -wait diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py deleted file mode 100644 index a5dd7ae6c15b358206e067385be260c94021bf20..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py +++ /dev/null @@ -1,128 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import numpy as np -import tqdm -import torch -import sys - -import faiss -import torch.nn.functional as F - -from wav2vec_cluster_faiss import parse_faiss_specs, Wav2VecFeatureReader - - -def get_parser(): - parser = argparse.ArgumentParser(description="apply clusters") - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--split', help='split to process', required=True) - parser.add_argument('--labels', help='split to process', default="phn") - parser.add_argument('--path', help='path to pca and centroids', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True) - parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14) - parser.add_argument('--max-tsz', type=int, help='batch kmeans up to this much', default=14) - # fmt: on - - return parser - - -def get_iterator(args): - label_path = osp.join(args.data, f"{args.split}.{args.labels}") - if osp.exists(label_path): - lp = open(label_path, "r") - else: - lp = None - - with open(osp.join(args.data, f"{args.split}.tsv"), "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [line.rstrip() for line in lines if len(line) > 0] - - if lp is not None: - lbls = [line.rstrip() for line in lp] - else: - lbls = [None] * len(files) - - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def iterate(): - for fname, lbl in zip(files, lbls): - file = osp.join(root, fname.split("\t")[0]) - feats = reader.get_feats(file) - yield feats.data, fname, lbl - - return iterate, num, root - - -def main(): - parser = get_parser() - args = parser.parse_args() - - spec = osp.basename(args.path) - - try: - faiss_spec = parse_faiss_specs(spec.rstrip("/"))[0] - except: - print(spec) - raise - - print("Faiss Spec:", faiss_spec, file=sys.stderr) - - if faiss_spec.pca: - A = torch.from_numpy(np.load(osp.join(args.path, "pca_A.npy"))).cuda() - b = torch.from_numpy(np.load(osp.join(args.path, "pca_b.npy"))).cuda() - print("Loaded PCA", file=sys.stderr) - - centroids = np.load(osp.join(args.path, "centroids.npy")) - print("Loaded centroids", centroids.shape, file=sys.stderr) - - res = faiss.StandardGpuResources() - index_flat = ( - faiss.IndexFlatL2(centroids.shape[1]) - if not faiss_spec.sphere - else faiss.IndexFlatIP(centroids.shape[1]) - ) - faiss_index = faiss.index_cpu_to_gpu(res, 0, index_flat) - faiss_index.add(centroids) - - generator, num, root = get_iterator(args) - iterator = generator() - - had_labels = False - label_path = osp.join(args.path, f"{args.split}.{args.labels}") - - with torch.no_grad(): - with open(osp.join(args.path, f"{args.split}.src"), "w") as fp, open( - osp.join(args.path, f"{args.split}.tsv"), "w" - ) as pp, open(label_path, "w") as lp: - print(root, file=pp) - for f, fname, lbl in tqdm.tqdm(iterator, total=num): - if faiss_spec.pca: - f = torch.mm(f, A) + b - if faiss_spec.norm: - f = F.normalize(f, p=2, dim=-1) - - f = f.cpu().numpy() - - _, z = faiss_index.search(f, 1) - - print(" ".join(str(x.item()) for x in z), file=fp) - print(fname, file=pp) - - if lbl is not None: - print(lbl, file=lp) - had_labels = True - if not had_labels: - os.remove(label_path) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/composite_loss.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/composite_loss.py deleted file mode 100644 index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/composite_loss.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.criterions import LegacyFairseqCriterion, register_criterion -from torch import nn - - -@register_criterion("composite_loss") -class CompositeLoss(LegacyFairseqCriterion): - """This is a composite loss that, given a list of model outputs and a list of targets, - computes an average of losses for each output-target pair""" - - def __init__(self, args, task): - super().__init__(args, task) - self.underlying_criterion = args.underlying_criterion - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True, - help='underlying criterion to use for the composite loss') - # fmt: on - - @staticmethod - def build_underlying_criterion(args, task): - saved_criterion = args.criterion - args.criterion = args.underlying_criterion - assert saved_criterion != args.underlying_criterion - underlying_criterion = task.build_criterion(args) - args.criterion = saved_criterion - return underlying_criterion - - @classmethod - def build_criterion(cls, args, task): - underlying_criterion = CompositeLoss.build_underlying_criterion(args, task) - - class FakeModel(nn.Module): - def __init__(self, model, net_out, target): - super().__init__() - self.model = model - self.net_out = net_out - self.target = target - - def forward(self, **unused): - return self.net_out - - def get_normalized_probs(self, net_output, log_probs, sample=None): - return self.model.get_normalized_probs( - net_output, log_probs, sample=sample - ) - - def get_targets(self, *unused): - return self.target - - @property - def decoder(self): - return self.model.decoder - - class _CompositeLoss(LegacyFairseqCriterion): - def __init__(self, args, task, underlying_criterion): - super().__init__(args, task) - self.underlying_criterion = underlying_criterion - - def forward(self, model, sample, reduce=True): - net_outputs = model(**sample["net_input"]) - targets = sample["target"] - - bsz = targets[0].size(0) - loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_() - - sample_size = 0 - logging_output = {} - for o, t in zip(net_outputs[0], targets): - m = FakeModel(model, (o, net_outputs[1]), t) - sample["target"] = t - l, ss, logging_output = self.underlying_criterion(m, sample, reduce) - loss += l - sample_size += ss - - loss.div_(len(targets)) - sample_size /= len(targets) - - logging_output["loss"] = utils.item(loss.data) if reduce else loss.data - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - return underlying_criterion.__class__.aggregate_logging_outputs( - logging_outputs - ) - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - underlying_criterion.__class__.reduce_metrics(logging_outputs) - - return _CompositeLoss(args, task, underlying_criterion) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp deleted file mode 100644 index d7e57c859085f98ec10960330ca763ae2764585a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp +++ /dev/null @@ -1,29 +0,0 @@ -#include -#include - -std::vector -dynamicconv_cpu_forward(float* input, float* filters, int padding_l); - -std::vector dynamicconv_cpu_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters); - -std::vector -dynamicconv_forward(float* input, float* filters, int padding_l) { - return dynamicconv_cpu_forward(input, filters, padding_l); -} - -std::vector dynamicconv_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters) { - return dynamicconv_cpu_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &dynamicconv_forward, "dynamicconv forward (CPU)"); - m.def("backward", &dynamicconv_backward, "dynamicconv backward (CPU)"); -} diff --git a/spaces/OFA-Sys/OFA-Image_Caption/utils/trie.py b/spaces/OFA-Sys/OFA-Image_Caption/utils/trie.py deleted file mode 100644 index 76d331d87fd99096e8228f34f297379221941045..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/utils/trie.py +++ /dev/null @@ -1,25 +0,0 @@ -from collections import defaultdict - - -class TreeNode(): - def __init__(self): - self.child = defaultdict(TreeNode) - -class Trie: - - def __init__(self, eos): - self.root = TreeNode() - self.eos = eos - - def insert(self, word): - cur = self.root - for c in word: - cur = cur.child[c] - - def get_next_layer(self, word): - cur = self.root - for c in word: - cur = cur.child.get(c) - if cur is None: - return [self.eos] - return list(cur.child.keys()) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-vqa/models/ofa/unify_multihead_attention.py b/spaces/OFA-Sys/OFA-vqa/models/ofa/unify_multihead_attention.py deleted file mode 100644 index 428daf0f9a74be58f9d7d00a4a61c682492e8780..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/models/ofa/unify_multihead_attention.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadAttention(nn.Module): - """Multi-headed attention. - - See "Attention Is All You Need" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - scale_factor=2, - scale_heads=False - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = float(self.head_dim * scale_factor) ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - self.c_attn = nn.Parameter(torch.ones((self.num_heads,)), requires_grad=True) if scale_heads else None - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - self_attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - attn_bias: Optional[Tensor] = None - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - is_tpu = query.device.type == "xla" - - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - assert embed_dim == self.embed_dim, f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if key is not None: - src_len, key_bsz, _ = key.size() - if not torch.jit.is_scripting(): - assert key_bsz == bsz - assert value is not None - assert src_len, bsz == value.shape[:2] - - if ( - not self.onnx_trace - and not is_tpu # don't use PyTorch version on TPUs - and incremental_state is None - and not static_kv - # A workaround for quantization to work. Otherwise JIT compilation - # treats bias in linear module as method. - and not torch.jit.is_scripting() - and self_attn_mask is None - and attn_bias is None - ): - assert key is not None and value is not None - return F.multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention and self_attn_mask is None: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - assert k.size(1) == src_len - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - torch.zeros(key_padding_mask.size(0), 1).type_as( - key_padding_mask - ), - ], - dim=1, - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_bias is not None: - attn_weights += attn_bias - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if self_attn_mask is not None: - self_attn_mask = self_attn_mask.unsqueeze(1).expand(bsz, self.num_heads, tgt_len, src_len) - attn_weights += self_attn_mask.contiguous().view(bsz * self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - if self.c_attn is not None: - attn = attn.view(tgt_len, bsz, self.num_heads, self.head_dim) - attn = torch.einsum('tbhd,h->tbhd', attn, self.c_attn) - attn = attn.reshape(tgt_len, bsz, self.embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - if src_len > prev_key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask.float() - elif key_padding_mask is not None: - if src_len > key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = key_padding_mask.float() - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(self, attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/spaces/OdiaGenAI/Olive_Farm/open_instruct/get_data_stats.py b/spaces/OdiaGenAI/Olive_Farm/open_instruct/get_data_stats.py deleted file mode 100644 index 5ac7bba3ee5818457d8c529fa8e44741fec4251e..0000000000000000000000000000000000000000 --- a/spaces/OdiaGenAI/Olive_Farm/open_instruct/get_data_stats.py +++ /dev/null @@ -1,121 +0,0 @@ -import json -import os -import sys -import tqdm -import pandas as pd -import numpy as np -import argparse -from datasets import load_dataset -from transformers import AutoTokenizer - - -def get_statistics_for_messages_data(data_path): - # load dataset - dataset = load_dataset("json", data_files={"train": data_path}) - # tokenize dataset - tokenizer = AutoTokenizer.from_pretrained("/net/nfs.cirrascale/allennlp/yizhongw/hf_llama_models/7B", use_fast=False) - # get statistics - num_instances = len(dataset["train"]) - num_of_turns = [len(instance["messages"]) for instance in dataset["train"]] - user_prompt_lengths = [] - assistant_response_lengths = [] - instance_lengths = [] - for instance in tqdm.tqdm(dataset["train"], desc="Processing instances"): - instance_length = 0 - for message in instance["messages"]: - if message["role"] == "user": - user_prompt_lengths.append(len(tokenizer(message["content"], truncation=False, add_special_tokens=False)["input_ids"])) - instance_length += user_prompt_lengths[-1] - elif message["role"] == "assistant": - assistant_response_lengths.append(len(tokenizer(message["content"], truncation=False, add_special_tokens=False)["input_ids"])) - instance_length += assistant_response_lengths[-1] - instance_lengths.append(instance_length) - - top_100_longest_instances = np.argsort(instance_lengths)[-100:][::-1].tolist() - top_100_longest_instances = [dataset["train"][i]["id"] for i in top_100_longest_instances] - - result = { - "num_instances": num_instances, - "turns_summary": pd.Series(num_of_turns).describe(), - "user_prompt_lengths_summary": pd.Series(user_prompt_lengths).describe(), - "assistant_response_lengths_summary": pd.Series(assistant_response_lengths).describe(), - "total_lengths_summary": pd.Series(instance_lengths).describe(), - "num_instances_with_total_length_gt_512": np.sum(np.array(instance_lengths) > 512), - "num_instances_with_total_length_gt_768": np.sum(np.array(instance_lengths) > 768), - "num_instances_with_total_length_gt_1024": np.sum(np.array(instance_lengths) > 1024), - "num_instances_with_total_length_gt_1536": np.sum(np.array(instance_lengths) > 1536), - "num_instances_with_total_length_gt_2048": np.sum(np.array(instance_lengths) > 2048), - "num_instances_with_total_length_gt_4096": np.sum(np.array(instance_lengths) > 4096), - "top_100_longest_instances": top_100_longest_instances, - } - - # convert everything to dict or scalar - for key, value in result.items(): - if isinstance(value, pd.Series): - result[key] = value.to_dict() - elif isinstance(value, np.ndarray): - result[key] = value.tolist() - elif isinstance(value, np.int64): - result[key] = int(value) - - return result - -def get_statistics_for_prompt_completion_data(data_path): - # load dataset - dataset = load_dataset("json", data_files={"train": data_path}) - prompts = [instance["prompt"] for instance in dataset["train"]] - completions = [instance["completion"] for instance in dataset["train"]] - # tokenize dataset - tokenizer = AutoTokenizer.from_pretrained("/net/nfs.cirrascale/allennlp/yizhongw/hf_llama_models/7B") - tokenized_prompts = tokenizer(prompts, truncation=False, add_special_tokens=False) - tokenized_completions = tokenizer(completions, truncation=False, add_special_tokens=False) - # get statistics - num_instances = len(dataset["train"]) - prompt_lengths = [len(tokenized_prompts["input_ids"][i]) for i in range(num_instances)] - completion_lengths = [len(tokenized_completions["input_ids"][i]) for i in range(num_instances)] - prompt_completion_lengths = [prompt_lengths[i] + completion_lengths[i] for i in range(num_instances)] - - result = { - "num_instances": num_instances, - "prompt_lengths_summary": pd.Series(prompt_lengths).describe(), - "completion_lengths_summary": pd.Series(completion_lengths).describe(), - "prompt_completion_lengths_summary": pd.Series(prompt_completion_lengths).describe(), - "num_instances_with_prompt_length_gt_512": np.sum(np.array(prompt_lengths) > 512), - "num_instances_with_completion_length_gt_512": np.sum(np.array(completion_lengths) > 512), - "num_instances_with_prompt_completion_length_gt_512": np.sum(np.array(prompt_completion_lengths) > 512), - "num_instances_with_completion_length_gt_768": np.sum(np.array(completion_lengths) > 768), - "num_instances_with_prompt_completion_length_gt_1024": np.sum(np.array(prompt_completion_lengths) > 1024), - } - - # convert everything to dict or scalar - for key, value in result.items(): - if isinstance(value, pd.Series): - result[key] = value.to_dict() - elif isinstance(value, np.ndarray): - result[key] = value.tolist() - elif isinstance(value, np.int64): - result[key] = int(value) - - return result - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--data_path", type=str, required=True) - parser.add_argument("--save_path", type=str, help="Path to save the statistics.") - args = parser.parse_args() - - with open(args.data_path, "r") as f: - sample = json.loads(f.readline()) - if "prompt" in sample: - statistics = get_statistics_for_prompt_completion_data(args.data_path) - elif "messages" in sample: - statistics = get_statistics_for_messages_data(args.data_path) - else: - raise ValueError("Invalid data format - the data should be either prompt completion data or messages data.") - - print(json.dumps(statistics, indent=4)) - - if args.save_path is not None: - with open(args.save_path, "w") as f: - json.dump(statistics, f, indent=4) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/tag2text.py b/spaces/OpenGVLab/InternGPT/iGPT/models/tag2text.py deleted file mode 100644 index c521d6a587d1cca70db4e16ce7897ec69d06c338..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/tag2text.py +++ /dev/null @@ -1,430 +0,0 @@ -''' - * Tag2Text - * Written by Xinyu Huang -''' -import warnings -warnings.filterwarnings("ignore") - -from .vit import VisionTransformer, interpolate_pos_embed -from .swin_transformer import SwinTransformer, interpolate_relative_pos_embed -from .med import BertConfig, BertModel, BertLMHeadModel -from .utils import tra_array -from transformers import BertTokenizer - -import torch -from torch import nn -import torch.nn.functional as F - -import os -from urllib.parse import urlparse -from timm.models.hub import download_cached_file -import json -import math -import numpy as np - -def read_json(rpath): - with open(rpath, 'r') as f: - return json.load(f) - -# delete some tags that may disturb captioning -# 127: "quarter"; 2961: "back"; 3351: "two"; 3265: "three"; 3338: "four"; 3355: "five"; 3359: "one" -delete_tag_index = [127,2961, 3351, 3265, 3338, 3355, 3359] - -# adjust thresholds for some tags -# default threshold: 0.68 -# 2701: "person"; 2828: "man"; 1167: "woman"; -tag_thrshold = {2701:0.7, 2828: 0.7, 1167: 0.7} - -class Tag2Text_Caption(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 384, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - prompt = 'a picture of ', - threshold = 0.68, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - if vit=='swin_b': - if image_size == 224: - vision_config_path = 'configs/swin/config_swinB_224.json' - elif image_size == 384: - vision_config_path = 'configs/swin/config_swinB_384.json' - vision_config = read_json(vision_config_path) - assert image_size == vision_config['image_res'] - # assert config['patch_size'] == 32 - vision_width = vision_config['vision_width'] - - self.visual_encoder = SwinTransformer(img_size=vision_config['image_res'], - patch_size=4, - in_chans=3, - embed_dim=vision_config['embed_dim'], - depths=vision_config['depths'], - num_heads=vision_config['num_heads'], - window_size=vision_config['window_size'], - mlp_ratio=4., - qkv_bias=True, - drop_rate=0.0, - drop_path_rate=0.1, - ape=False, - patch_norm=True, - use_checkpoint=False) - - else: - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - - - self.tokenizer = init_tokenizer() - - # create the decoder - decoder_config = BertConfig.from_json_file(med_config) - decoder_config.encoder_width = 768 - self.text_decoder = BertLMHeadModel(config=decoder_config) - - # create encoder - encoder_config = BertConfig.from_json_file(med_config) - encoder_config.encoder_width = vision_width - self.tag_encoder = BertModel(config=encoder_config, add_pooling_layer=False) - - self.prompt = prompt - self.prompt_length = len(self.tokenizer(self.prompt).input_ids)-1 - - self.threshold = threshold - num_features = 768 - self.num_class = 3429 - - q2l_config = BertConfig.from_json_file('configs/q2l_config.json') - q2l_config.encoder_width = vision_width - self.vision_multi = BertModel(config=q2l_config, add_pooling_layer=False) - self.vision_multi.resize_token_embeddings(len(self.tokenizer)) - self.label_embed = nn.Embedding(self.num_class, q2l_config.hidden_size) - self.fc = GroupWiseLinear(self.num_class, num_features, bias=True) - self.del_selfattention() - - tie_encoder_decoder_weights(self.tag_encoder,self.vision_multi,'',' ') - self.tag_array = tra_array - - self.class_threshold = torch.ones(self.num_class) * self.threshold - for key,value in tag_thrshold.items(): - self.class_threshold[key] = value - - def del_selfattention(self): - del self.vision_multi.embeddings - for layer in self.vision_multi.encoder.layer: - del layer.attention - - def generate(self, image, sample=False, num_beams=3, max_length=30, min_length=10, top_p=0.9, repetition_penalty=1.0, tag_input = None, return_tag_predict = False): - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - - #==============generate tag==============# - if tag_input == None: - image_spatial_embeds = image_embeds[:,1:,:] - image_cls_embeds = image_embeds[:,0,:] - - bs = image_spatial_embeds.shape[0] - label_embed = self.label_embed.weight.unsqueeze(0).repeat(bs,1,1) - mlr_tagembedding = self.vision_multi(encoder_embeds = label_embed, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = False, - mode = 'mlr', - ) - - logits = self.fc(mlr_tagembedding[0]) - - # targets = torch.where(torch.sigmoid(logits) > self.threshold , torch.tensor(1.0).to(image.device), torch.zeros(self.num_class).to(image.device)) - targets = torch.where(torch.sigmoid(logits) > self.class_threshold.to(image.device) , torch.tensor(1.0).to(image.device), torch.zeros(self.num_class).to(image.device)) - - tag = targets.cpu().numpy() - tag[:,delete_tag_index] = 0 - bs = image.size(0) - tag_input = [] - for b in range(bs): - index = np.argwhere(tag[b] == 1) - token = self.tag_array[index].squeeze(axis = 1) - tag_input.append(' | '.join(token)) - #========================================# - - if not sample: - image_embeds = image_embeds.repeat_interleave(num_beams,dim=0) - image_atts = image_atts.repeat_interleave(num_beams,dim=0) - tag_input_temp = [] - for tag in tag_input: - for i in range(num_beams): - tag_input_temp.append(tag) - tag_input = tag_input_temp - - - tag_input_tokenzier = self.tokenizer(tag_input, padding='max_length', truncation=True, max_length=40, - return_tensors="pt").to(image.device) - encoder_input_ids = tag_input_tokenzier.input_ids - encoder_input_ids[:,0] = self.tokenizer.enc_token_id - - output_tagembedding = self.tag_encoder(encoder_input_ids, - attention_mask = tag_input_tokenzier.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - - prompt = [self.prompt] * image.size(0) - input_ids = self.tokenizer(prompt, return_tensors="pt").input_ids.to(image.device) - input_ids[:,0] = self.tokenizer.bos_token_id - input_ids = input_ids[:, :-1] - - if sample: - #nucleus sampling - model_kwargs = {"encoder_hidden_states": output_tagembedding.last_hidden_state, "encoder_attention_mask":None} - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - do_sample=True, - top_p=top_p, - num_return_sequences=1, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=1.1, - **model_kwargs) - else: - #beam search - model_kwargs = {"encoder_hidden_states": output_tagembedding.last_hidden_state, "encoder_attention_mask":None} - outputs = self.text_decoder.generate(input_ids=input_ids, - max_length=max_length, - min_length=min_length, - num_beams=num_beams, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - repetition_penalty=repetition_penalty, - **model_kwargs) - - captions = [] - for output in outputs: - caption = self.tokenizer.decode(output, skip_special_tokens=True) - captions.append(caption[len(self.prompt):]) - if return_tag_predict == True: - if sample: - return captions, tag_input - else: - return captions, tag_input[0:int(len(tag_input)/num_beams)] - return captions - - -def tag2text_caption(pretrained='',**kwargs): - model = Tag2Text_Caption(**kwargs) - if pretrained: - if kwargs['vit'] == 'swin_b': - model,msg = load_checkpoint_swinbase(model,pretrained,kwargs) - else: - model,msg = load_checkpoint(model,pretrained) - # print('vit:',kwargs['vit']) - # print('msg_v2',msg) - return model - - -from typing import List -def tie_encoder_decoder_weights(encoder: nn.Module, decoder: nn.Module, base_model_prefix: str, skip_key:str): - uninitialized_encoder_weights: List[str] = [] - if decoder.__class__ != encoder.__class__: - logger.info( - f"{decoder.__class__} and {encoder.__class__} are not equal. In this case make sure that all encoder weights are correctly initialized." - ) - - def tie_encoder_to_decoder_recursively( - decoder_pointer: nn.Module, - encoder_pointer: nn.Module, - module_name: str, - uninitialized_encoder_weights: List[str], - skip_key: str, - depth=0, - ): - assert isinstance(decoder_pointer, nn.Module) and isinstance( - encoder_pointer, nn.Module - ), f"{decoder_pointer} and {encoder_pointer} have to be of type torch.nn.Module" - if hasattr(decoder_pointer, "weight") and skip_key not in module_name: - assert hasattr(encoder_pointer, "weight") - encoder_pointer.weight = decoder_pointer.weight - if hasattr(decoder_pointer, "bias"): - assert hasattr(encoder_pointer, "bias") - encoder_pointer.bias = decoder_pointer.bias - # print(module_name+' is tied') - return - - encoder_modules = encoder_pointer._modules - decoder_modules = decoder_pointer._modules - if len(decoder_modules) > 0: - assert ( - len(encoder_modules) > 0 - ), f"Encoder module {encoder_pointer} does not match decoder module {decoder_pointer}" - - all_encoder_weights = set([module_name + "/" + sub_name for sub_name in encoder_modules.keys()]) - encoder_layer_pos = 0 - for name, module in decoder_modules.items(): - if name.isdigit(): - encoder_name = str(int(name) + encoder_layer_pos) - decoder_name = name - if not isinstance(decoder_modules[decoder_name], type(encoder_modules[encoder_name])) and len( - encoder_modules - ) != len(decoder_modules): - # this can happen if the name corresponds to the position in a list module list of layers - # in this case the decoder has added a cross-attention that the encoder does not have - # thus skip this step and subtract one layer pos from encoder - encoder_layer_pos -= 1 - continue - elif name not in encoder_modules: - continue - elif depth > 500: - raise ValueError( - "Max depth of recursive function `tie_encoder_to_decoder` reached. It seems that there is a circular dependency between two or more `nn.Modules` of your model." - ) - else: - decoder_name = encoder_name = name - tie_encoder_to_decoder_recursively( - decoder_modules[decoder_name], - encoder_modules[encoder_name], - module_name + "/" + name, - uninitialized_encoder_weights, - skip_key, - depth=depth + 1, - ) - all_encoder_weights.remove(module_name + "/" + encoder_name) - - uninitialized_encoder_weights += list(all_encoder_weights) - - # tie weights recursively - tie_encoder_to_decoder_recursively(decoder, encoder, base_model_prefix, uninitialized_encoder_weights, skip_key) - - -class GroupWiseLinear(nn.Module): - # could be changed to: - # output = torch.einsum('ijk,zjk->ij', x, self.W) - # or output = torch.einsum('ijk,jk->ij', x, self.W[0]) - def __init__(self, num_class, hidden_dim, bias=True): - super().__init__() - self.num_class = num_class - self.hidden_dim = hidden_dim - self.bias = bias - - self.W = nn.Parameter(torch.Tensor(1, num_class, hidden_dim)) - if bias: - self.b = nn.Parameter(torch.Tensor(1, num_class)) - self.reset_parameters() - - def reset_parameters(self): - stdv = 1. / math.sqrt(self.W.size(2)) - for i in range(self.num_class): - self.W[0][i].data.uniform_(-stdv, stdv) - if self.bias: - for i in range(self.num_class): - self.b[0][i].data.uniform_(-stdv, stdv) - - def forward(self, x): - # x: B,K,d - x = (self.W * x).sum(-1) - if self.bias: - x = x + self.b - return x - - -def init_tokenizer(): - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - tokenizer.add_special_tokens({'bos_token':'[DEC]'}) - tokenizer.add_special_tokens({'additional_special_tokens':['[ENC]']}) - tokenizer.enc_token_id = tokenizer.additional_special_tokens_ids[0] - return tokenizer - - -def create_vit(vit, image_size, use_grad_checkpointing=False, ckpt_layer=0, drop_path_rate=0): - - assert vit in ['base', 'large'], "vit parameter must be base or large" - if vit=='base': - vision_width = 768 - visual_encoder = VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=12, - num_heads=12, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0 or drop_path_rate - ) - elif vit=='large': - vision_width = 1024 - visual_encoder = VisionTransformer(img_size=image_size, patch_size=16, embed_dim=vision_width, depth=24, - num_heads=16, use_grad_checkpointing=use_grad_checkpointing, ckpt_layer=ckpt_layer, - drop_path_rate=0.1 or drop_path_rate - ) - return visual_encoder, vision_width - -def is_url(url_or_filename): - parsed = urlparse(url_or_filename) - return parsed.scheme in ("http", "https") - -def load_checkpoint(model,url_or_filename): - if is_url(url_or_filename): - cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True) - checkpoint = torch.load(cached_file, map_location='cpu') - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location='cpu') - else: - raise RuntimeError('checkpoint url or path is invalid') - - state_dict = checkpoint['model'] - - state_dict['visual_encoder.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder.pos_embed'],model.visual_encoder) - if 'visual_encoder_m.pos_embed' in model.state_dict().keys(): - state_dict['visual_encoder_m.pos_embed'] = interpolate_pos_embed(state_dict['visual_encoder_m.pos_embed'], - model.visual_encoder_m) - for key in model.state_dict().keys(): - if key in state_dict.keys(): - if state_dict[key].shape!=model.state_dict()[key].shape: - del state_dict[key] - - msg = model.load_state_dict(state_dict,strict=False) - print('load checkpoint from %s'%url_or_filename) - return model,msg - - -def load_checkpoint_swinbase(model,url_or_filename,kwargs): - if kwargs['image_size'] == 224: - vision_config_path = 'configs/swin/config_swinB_224.json' - elif kwargs['image_size'] == 384: - vision_config_path = 'configs/swin/config_swinB_384.json' - elif kwargs['image_size'] == 480: - vision_config_path = 'configs/swin/config_swinB_480.json' - elif kwargs['image_size'] == 576: - vision_config_path = 'configs/swin/config_swinB_576.json' - elif kwargs['image_size'] == 608: - vision_config_path = 'configs/swin/config_swinB_608.json' - window_size = read_json(vision_config_path)['window_size'] - # print('--------------') - # print(url_or_filename) - # print('--------------') - if is_url(url_or_filename): - cached_file = download_cached_file(url_or_filename, check_hash=False, progress=True) - checkpoint = torch.load(cached_file, map_location='cpu') - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location='cpu') - else: - raise RuntimeError('checkpoint url or path is invalid') - - state_dict = checkpoint['model'] - - for k in list(state_dict.keys()): - if 'relative_position_bias_table' in k: - dst_num_pos = (2 * window_size - 1) ** 2 - state_dict[k] = interpolate_relative_pos_embed(state_dict[k], dst_num_pos, param_name=k) - elif ('relative_position_index' in k) or ('attn_mask' in k): - del state_dict[k] - - msg = model.load_state_dict(state_dict,strict=False) - print('load checkpoint from %s'%url_or_filename) - return model,msg - - - - - diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/models/__init__.py b/spaces/OpenMotionLab/MotionGPT/mGPT/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/types.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/types.go deleted file mode 100644 index f5e1cabb2ca9b86a0e443d4ec2c3dc29ca913572..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/types.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/seg/builder.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/seg/builder.py deleted file mode 100644 index db61f03d4abb2072f2532ce4429c0842495e015b..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/seg/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg - -PIXEL_SAMPLERS = Registry('pixel sampler') - - -def build_pixel_sampler(cfg, **default_args): - """Build pixel sampler for segmentation map.""" - return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args) diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/nms.h b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/nms.h deleted file mode 100644 index 929cf4a5c784511510747201251d49880318114c..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/csrc/nms.h +++ /dev/null @@ -1,45 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -#pragma once -#include "cpu/vision.h" - -#ifdef WITH_CUDA -#include "cuda/vision.h" -#endif - - -at::Tensor nms(const at::Tensor& dets, - const at::Tensor& scores, - const float threshold) { - - if (dets.device().is_cuda()) { -#ifdef WITH_CUDA - // TODO raise error if not compiled with CUDA - if (dets.numel() == 0) - return at::empty({0}, dets.options().dtype(at::kLong).device(at::kCPU)); - auto b = at::cat({dets, scores.unsqueeze(1)}, 1); - return nms_cuda(b, threshold); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - - at::Tensor result = nms_cpu(dets, scores, threshold); - return result; -} - - -std::pair soft_nms(const at::Tensor& dets, - const at::Tensor& scores, - const float threshold, - const float sigma) { - - if (dets.device().is_cuda()) { -#ifdef WITH_CUDA - AT_ERROR("Soft NMS Does Not have GPU support"); -#endif - } - - std::pair result = soft_nms_cpu(dets, scores, threshold, sigma); - - return result; -} \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor_glip.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor_glip.py deleted file mode 100644 index cbdfcc24b6abf711a77217d03c83bad7d6c6f442..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/predictor_glip.py +++ /dev/null @@ -1,471 +0,0 @@ -import cv2 -import torch -import re -import numpy as np -from typing import List, Union -import nltk -import inflect -from transformers import AutoTokenizer -from torchvision import transforms as T -import pdb -from maskrcnn_benchmark.modeling.detector import build_detection_model -from maskrcnn_benchmark.utils.checkpoint import DetectronCheckpointer -from maskrcnn_benchmark.structures.image_list import to_image_list -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark import layers as L -from maskrcnn_benchmark.modeling.roi_heads.mask_head.inference import Masker -from maskrcnn_benchmark.utils import cv2_util - -engine = inflect.engine() -nltk.download('punkt') -nltk.download('averaged_perceptron_tagger') - -import timeit - - -class GLIPDemo(object): - def __init__(self, - cfg, - confidence_threshold=0.7, - min_image_size=None, - show_mask_heatmaps=False, - masks_per_dim=5, - load_model=True - ): - self.cfg = cfg.clone() - if load_model: - self.model = build_detection_model(cfg) - self.model.eval() - self.device = torch.device(cfg.MODEL.DEVICE) - self.model.to(self.device) - self.min_image_size = min_image_size - self.show_mask_heatmaps = show_mask_heatmaps - self.masks_per_dim = masks_per_dim - - save_dir = cfg.OUTPUT_DIR - if load_model: - checkpointer = DetectronCheckpointer(cfg, self.model, save_dir=save_dir) - _ = checkpointer.load(cfg.MODEL.WEIGHT) - - self.transforms = self.build_transform() - - # used to make colors for each tokens - mask_threshold = -1 if show_mask_heatmaps else 0.5 - self.masker = Masker(threshold=mask_threshold, padding=1) - self.palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1]) - self.cpu_device = torch.device("cpu") - self.confidence_threshold = confidence_threshold - - self.tokenizer = self.build_tokenizer() - - def build_transform(self): - """ - Creates a basic transformation that was used to train the models - """ - cfg = self.cfg - - # we are loading images with OpenCV, so we don't need to convert them - # to BGR, they are already! So all we need to do is to normalize - # by 255 if we want to convert to BGR255 format, or flip the channels - # if we want it to be in RGB in [0-1] range. - if cfg.INPUT.TO_BGR255: - to_bgr_transform = T.Lambda(lambda x: x * 255) - else: - to_bgr_transform = T.Lambda(lambda x: x[[2, 1, 0]]) - - normalize_transform = T.Normalize( - mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD - ) - - transform = T.Compose( - [ - T.ToPILImage(), - T.Resize(self.min_image_size) if self.min_image_size is not None else lambda x: x, - T.ToTensor(), - to_bgr_transform, - normalize_transform, - ] - ) - return transform - - def build_tokenizer(self): - cfg = self.cfg - tokenizer = None - if cfg.MODEL.LANGUAGE_BACKBONE.TOKENIZER_TYPE == "bert-base-uncased": - tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") - elif cfg.MODEL.LANGUAGE_BACKBONE.TOKENIZER_TYPE == "clip": - from transformers import CLIPTokenizerFast - if cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS: - tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32", - from_slow=True, mask_token='ðŁĴij') - else: - tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32", - from_slow=True) - return tokenizer - - def run_ner(self, caption): - noun_phrases = find_noun_phrases(caption) - noun_phrases = [remove_punctuation(phrase) for phrase in noun_phrases] - noun_phrases = [phrase for phrase in noun_phrases if phrase != ''] - relevant_phrases = noun_phrases - labels = noun_phrases - self.entities = labels - - tokens_positive = [] - - for entity, label in zip(relevant_phrases, labels): - try: - # search all occurrences and mark them as different entities - for m in re.finditer(entity, caption.lower()): - tokens_positive.append([[m.start(), m.end()]]) - except: - print("noun entities:", noun_phrases) - print("entity:", entity) - print("caption:", caption.lower()) - - return tokens_positive - - def inference(self, original_image, original_caption): - predictions = self.compute_prediction(original_image, original_caption) - top_predictions = self._post_process_fixed_thresh(predictions) - return top_predictions - - def run_on_web_image(self, - original_image, - original_caption, - thresh=0.5, - custom_entity=None, - alpha=0.0): - predictions = self.compute_prediction(original_image, original_caption, custom_entity) - top_predictions = self._post_process(predictions, thresh) - - result = original_image.copy() - if self.show_mask_heatmaps: - return self.create_mask_montage(result, top_predictions) - result = self.overlay_boxes(result, top_predictions) - result = self.overlay_entity_names(result, top_predictions) - if self.cfg.MODEL.MASK_ON: - result = self.overlay_mask(result, top_predictions) - return result, top_predictions - - def visualize_with_predictions(self, - original_image, - predictions, - thresh=0.5, - alpha=0.0, - box_pixel=3, - text_size=1, - text_pixel=2, - text_offset=10, - text_offset_original=4, - color=255): - self.color = color - height, width = original_image.shape[:-1] - predictions = predictions.resize((width, height)) - top_predictions = self._post_process(predictions, thresh) - - result = original_image.copy() - if self.show_mask_heatmaps: - return self.create_mask_montage(result, top_predictions) - result = self.overlay_boxes(result, top_predictions, alpha=alpha, box_pixel=box_pixel) - result = self.overlay_entity_names(result, top_predictions, text_size=text_size, text_pixel=text_pixel, - text_offset=text_offset, text_offset_original=text_offset_original) - if self.cfg.MODEL.MASK_ON: - result = self.overlay_mask(result, top_predictions) - return result, top_predictions - - def compute_prediction(self, original_image, original_caption, custom_entity=None): - # image - image = self.transforms(original_image) - image_list = to_image_list(image, self.cfg.DATALOADER.SIZE_DIVISIBILITY) - image_list = image_list.to(self.device) - # caption - if isinstance(original_caption, list): - # we directly provided a list of category names - caption_string = "" - tokens_positive = [] - seperation_tokens = " . " - for word in original_caption: - tokens_positive.append([len(caption_string), len(caption_string) + len(word)]) - caption_string += word - caption_string += seperation_tokens - - tokenized = self.tokenizer([caption_string], return_tensors="pt") - tokens_positive = [tokens_positive] - - original_caption = caption_string - print(tokens_positive) - else: - tokenized = self.tokenizer([original_caption], return_tensors="pt") - if custom_entity is None: - tokens_positive = self.run_ner(original_caption) - print(tokens_positive) - # process positive map - positive_map = create_positive_map(tokenized, tokens_positive) - - if self.cfg.MODEL.RPN_ARCHITECTURE == "VLDYHEAD": - plus = 1 - else: - plus = 0 - - positive_map_label_to_token = create_positive_map_label_to_token_from_positive_map(positive_map, plus=plus) - self.plus = plus - self.positive_map_label_to_token = positive_map_label_to_token - tic = timeit.time.perf_counter() - - # compute predictions - with torch.no_grad(): - predictions = self.model(image_list, captions=[original_caption], positive_map=positive_map_label_to_token) - predictions = [o.to(self.cpu_device) for o in predictions] - print("inference time per image: {}".format(timeit.time.perf_counter() - tic)) - - # always single image is passed at a time - prediction = predictions[0] - - # reshape prediction (a BoxList) into the original image size - height, width = original_image.shape[:-1] - prediction = prediction.resize((width, height)) - - if prediction.has_field("mask"): - # if we have masks, paste the masks in the right position - # in the image, as defined by the bounding boxes - masks = prediction.get_field("mask") - # always single image is passed at a time - masks = self.masker([masks], [prediction])[0] - prediction.add_field("mask", masks) - - return prediction - - def _post_process_fixed_thresh(self, predictions): - scores = predictions.get_field("scores") - labels = predictions.get_field("labels").tolist() - thresh = scores.clone() - for i, lb in enumerate(labels): - if isinstance(self.confidence_threshold, float): - thresh[i] = self.confidence_threshold - elif len(self.confidence_threshold) == 1: - thresh[i] = self.confidence_threshold[0] - else: - thresh[i] = self.confidence_threshold[lb - 1] - keep = torch.nonzero(scores > thresh).squeeze(1) - predictions = predictions[keep] - - scores = predictions.get_field("scores") - _, idx = scores.sort(0, descending=True) - return predictions[idx] - - def _post_process(self, predictions, threshold=0.5): - scores = predictions.get_field("scores") - labels = predictions.get_field("labels").tolist() - thresh = scores.clone() - for i, lb in enumerate(labels): - if isinstance(self.confidence_threshold, float): - thresh[i] = threshold - elif len(self.confidence_threshold) == 1: - thresh[i] = threshold - else: - thresh[i] = self.confidence_threshold[lb - 1] - keep = torch.nonzero(scores > thresh).squeeze(1) - predictions = predictions[keep] - - scores = predictions.get_field("scores") - _, idx = scores.sort(0, descending=True) - return predictions[idx] - - def compute_colors_for_labels(self, labels): - """ - Simple function that adds fixed colors depending on the class - """ - colors = (300 * (labels[:, None] - 1) + 1) * self.palette - colors = (colors % 255).numpy().astype("uint8") - try: - colors = (colors * 0 + self.color).astype("uint8") - except: - pass - return colors - - def overlay_boxes(self, image, predictions, alpha=0.5, box_pixel=3): - labels = predictions.get_field("labels") - boxes = predictions.bbox - - colors = self.compute_colors_for_labels(labels).tolist() - new_image = image.copy() - for box, color in zip(boxes, colors): - box = box.to(torch.int64) - top_left, bottom_right = box[:2].tolist(), box[2:].tolist() - new_image = cv2.rectangle( - new_image, tuple(top_left), tuple(bottom_right), tuple(color), box_pixel) - - # Following line overlays transparent rectangle over the image - image = cv2.addWeighted(new_image, alpha, image, 1 - alpha, 0) - - return image - - def overlay_scores(self, image, predictions): - scores = predictions.get_field("scores") - boxes = predictions.bbox - - for box, score in zip(boxes, scores): - box = box.to(torch.int64) - image = cv2.putText(image, '%.3f' % score, - (int(box[0]), int((box[1] + box[3]) / 2)), - cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA) - - return image - - def overlay_entity_names(self, image, predictions, names=None, text_size=0.7, text_pixel=2, text_offset=10, - text_offset_original=4): - scores = predictions.get_field("scores").tolist() - labels = predictions.get_field("labels").tolist() - new_labels = [] - if self.cfg.MODEL.RPN_ARCHITECTURE == "VLDYHEAD": - plus = 1 - else: - plus = 0 - self.plus = plus - if self.entities and self.plus: - for i in labels: - if i <= len(self.entities): - new_labels.append(self.entities[i - self.plus]) - else: - new_labels.append('object') - # labels = [self.entities[i - self.plus] for i in labels ] - else: - new_labels = ['object' for i in labels] - boxes = predictions.bbox - - template = "{}:{:.2f}" - previous_locations = [] - for box, score, label in zip(boxes, scores, new_labels): - x, y = box[:2] - s = template.format(label, score).replace("_", " ").replace("(", "").replace(")", "") - for x_prev, y_prev in previous_locations: - if abs(x - x_prev) < abs(text_offset) and abs(y - y_prev) < abs(text_offset): - y -= text_offset - - cv2.putText( - image, s, (int(x), int(y) - text_offset_original), cv2.FONT_HERSHEY_SIMPLEX, text_size, - (255, 255, 255), text_pixel, cv2.LINE_AA - ) - previous_locations.append((int(x), int(y))) - - return image - - def overlay_mask(self, image, predictions): - masks = predictions.get_field("mask").numpy() - labels = predictions.get_field("labels") - - colors = self.compute_colors_for_labels(labels).tolist() - - # import pdb - # pdb.set_trace() - # masks = masks > 0.1 - - for mask, color in zip(masks, colors): - thresh = mask[0, :, :, None].astype(np.uint8) - contours, hierarchy = cv2_util.findContours( - thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - image = cv2.drawContours(image, contours, -1, color, 2) - - composite = image - - return composite - - def create_mask_montage(self, image, predictions): - masks = predictions.get_field("mask") - masks_per_dim = self.masks_per_dim - masks = L.interpolate( - masks.float(), scale_factor=1 / masks_per_dim - ).byte() - height, width = masks.shape[-2:] - max_masks = masks_per_dim ** 2 - masks = masks[:max_masks] - # handle case where we have less detections than max_masks - if len(masks) < max_masks: - masks_padded = torch.zeros(max_masks, 1, height, width, dtype=torch.uint8) - masks_padded[: len(masks)] = masks - masks = masks_padded - masks = masks.reshape(masks_per_dim, masks_per_dim, height, width) - result = torch.zeros( - (masks_per_dim * height, masks_per_dim * width), dtype=torch.uint8 - ) - for y in range(masks_per_dim): - start_y = y * height - end_y = (y + 1) * height - for x in range(masks_per_dim): - start_x = x * width - end_x = (x + 1) * width - result[start_y:end_y, start_x:end_x] = masks[y, x] - - return cv2.applyColorMap(result.numpy(), cv2.COLORMAP_JET), None - - -def create_positive_map_label_to_token_from_positive_map(positive_map, plus=0): - positive_map_label_to_token = {} - for i in range(len(positive_map)): - positive_map_label_to_token[i + plus] = torch.nonzero(positive_map[i], as_tuple=True)[0].tolist() - return positive_map_label_to_token - - -def create_positive_map(tokenized, tokens_positive): - """construct a map such that positive_map[i,j] = True iff box i is associated to token j""" - positive_map = torch.zeros((len(tokens_positive), 256), dtype=torch.float) - - for j, tok_list in enumerate(tokens_positive): - for (beg, end) in tok_list: - try: - beg_pos = tokenized.char_to_token(beg) - end_pos = tokenized.char_to_token(end - 1) - except Exception as e: - print("beg:", beg, "end:", end) - print("token_positive:", tokens_positive) - # print("beg_pos:", beg_pos, "end_pos:", end_pos) - raise e - if beg_pos is None: - try: - beg_pos = tokenized.char_to_token(beg + 1) - if beg_pos is None: - beg_pos = tokenized.char_to_token(beg + 2) - except: - beg_pos = None - if end_pos is None: - try: - end_pos = tokenized.char_to_token(end - 2) - if end_pos is None: - end_pos = tokenized.char_to_token(end - 3) - except: - end_pos = None - if beg_pos is None or end_pos is None: - continue - - assert beg_pos is not None and end_pos is not None - positive_map[j, beg_pos: end_pos + 1].fill_(1) - return positive_map / (positive_map.sum(-1)[:, None] + 1e-6) - - -def find_noun_phrases(caption: str) -> List[str]: - caption = caption.lower() - tokens = nltk.word_tokenize(caption) - pos_tags = nltk.pos_tag(tokens) - - grammar = "NP: {
      ?*+}" - cp = nltk.RegexpParser(grammar) - result = cp.parse(pos_tags) - - noun_phrases = list() - for subtree in result.subtrees(): - if subtree.label() == 'NP': - noun_phrases.append(' '.join(t[0] for t in subtree.leaves())) - - return noun_phrases - - -def remove_punctuation(text: str) -> str: - punct = ['|', ':', ';', '@', '(', ')', '[', ']', '{', '}', '^', - '\'', '\"', '’', '`', '?', '$', '%', '#', '!', '&', '*', '+', ',', '.' - ] - for p in punct: - text = text.replace(p, '') - return text.strip() diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/TRAINING.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/TRAINING.md deleted file mode 100644 index 148de295f2ddfed2e4e893576bf31e1485038b8e..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/TRAINING.md +++ /dev/null @@ -1,312 +0,0 @@ -# AudioCraft training pipelines - -AudioCraft training pipelines are built on top of PyTorch as our core deep learning library -and [Flashy](https://github.com/facebookresearch/flashy) as our training pipeline design library, -and [Dora](https://github.com/facebookresearch/dora) as our experiment manager. -AudioCraft training pipelines are designed to be research and experiment-friendly. - - -## Environment setup - -For the base installation, follow the instructions from the [README.md](../README.md). -Below are some additional instructions for setting up environment to train new models. - -### Team and cluster configuration - -In order to support multiple teams and clusters, AudioCraft uses an environment configuration. -The team configuration allows to specify cluster-specific configurations (e.g. SLURM configuration), -or convenient mapping of paths between the supported environments. - -Each team can have a yaml file under the [configuration folder](../config). To select a team set the -`AUDIOCRAFT_TEAM` environment variable to a valid team name (e.g. `labs` or `default`): -```shell -conda env config vars set AUDIOCRAFT_TEAM=default -``` - -Alternatively, you can add it to your `.bashrc`: -```shell -export AUDIOCRAFT_TEAM=default -``` - -If not defined, the environment will default to the `default` team. - -The cluster is automatically detected, but it is also possible to override it by setting -the `AUDIOCRAFT_CLUSTER` environment variable. - -Based on this team and cluster, the environment is then configured with: -* The dora experiment outputs directory. -* The available slurm partitions: categorized by global and team. -* A shared reference directory: In order to facilitate sharing research models while remaining -agnostic to the used compute cluster, we created the `//reference` symbol that can be used in -YAML config to point to a defined reference folder containing shared checkpoints -(e.g. baselines, models for evaluation...). - -**Important:** The default output dir for trained models and checkpoints is under `/tmp/`. This is suitable -only for quick testing. If you are doing anything serious you MUST edit the file `default.yaml` and -properly set the `dora_dir` entries. - -#### Overriding environment configurations - -You can set the following environmet variables to bypass the team's environment configuration: -* `AUDIOCRAFT_CONFIG`: absolute path to a team config yaml file. -* `AUDIOCRAFT_DORA_DIR`: absolute path to a custom dora directory. -* `AUDIOCRAFT_REFERENCE_DIR`: absolute path to the shared reference directory. - -## Training pipelines - -Each task supported in AudioCraft has its own training pipeline and dedicated solver. -Learn more about solvers and key designs around AudioCraft training pipeline below. -Please refer to the documentation of each task and model for specific information on a given task. - - -### Solvers - -The core training component in AudioCraft is the solver. A solver holds the definition -of how to solve a given task: It implements the training pipeline logic, combining the datasets, -model, optimization criterion and components and the full training loop. We refer the reader -to [Flashy](https://github.com/facebookresearch/flashy) for core principles around solvers. - -AudioCraft proposes an initial solver, the `StandardSolver` that is used as the base implementation -for downstream solvers. This standard solver provides a nice base management of logging, -checkpoints loading/saving, xp restoration, etc. on top of the base Flashy implementation. -In AudioCraft, we made the assumption that all tasks are following the same set of stages: -train, valid, evaluate and generation, each relying on a dedicated dataset. - -Each solver is responsible for defining the task to solve and the associated stages -of the training loop in order to leave the full ownership of the training pipeline -to the researchers. This includes loading the datasets, building the model and -optimisation components, registering them and defining the execution of each stage. -To create a new solver for a given task, one should extend the StandardSolver -and define each stage of the training loop. One can further customise its own solver -starting from scratch instead of inheriting from the standard solver. - -```python -from . import base -from .. import optim - - -class MyNewSolver(base.StandardSolver): - - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - # one can add custom attributes to the solver - self.criterion = torch.nn.L1Loss() - - def best_metric(self): - # here optionally specify which metric to use to keep track of best state - return 'loss' - - def build_model(self): - # here you can instantiate your models and optimization related objects - # this method will be called by the StandardSolver init method - self.model = ... - # the self.cfg attribute contains the raw configuration - self.optimizer = optim.build_optimizer(self.model.parameters(), self.cfg.optim) - # don't forget to register the states you'd like to include in your checkpoints! - self.register_stateful('model', 'optimizer') - # keep the model best state based on the best value achieved at validation for the given best_metric - self.register_best('model') - # if you want to add EMA around the model - self.register_ema('model') - - def build_dataloaders(self): - # here you can instantiate your dataloaders - # this method will be called by the StandardSolver init method - self.dataloaders = ... - - ... - - # For both train and valid stages, the StandardSolver relies on - # a share common_train_valid implementation that is in charge of - # accessing the appropriate loader, iterate over the data up to - # the specified number of updates_per_epoch, run the ``run_step`` - # function that you need to implement to specify the behavior - # and finally update the EMA and collect the metrics properly. - @abstractmethod - def run_step(self, idx: int, batch: tp.Any, metrics: dict): - """Perform one training or valid step on a given batch. - """ - ... # provide your implementation of the solver over a batch - - def train(self): - """Train stage. - """ - return self.common_train_valid('train') - - def valid(self): - """Valid stage. - """ - return self.common_train_valid('valid') - - @abstractmethod - def evaluate(self): - """Evaluate stage. - """ - ... # provide your implementation here! - - @abstractmethod - def generate(self): - """Generate stage. - """ - ... # provide your implementation here! -``` - -### About Epochs - -AudioCraft Solvers uses the concept of Epoch. One epoch doesn't necessarily mean one pass over the entire -dataset, but instead represent the smallest amount of computation that we want to work with before checkpointing. -Typically, we find that having an Epoch time around 30min is ideal both in terms of safety (checkpointing often enough) -and getting updates often enough. One Epoch is at least a `train` stage that lasts for `optim.updates_per_epoch` (2000 by default), -and a `valid` stage. You can control how long the valid stage takes with `dataset.valid.num_samples`. -Other stages (`evaluate`, `generate`) will only happen every X epochs, as given by `evaluate.every` and `generate.every`). - - -### Models - -In AudioCraft, a model is a container object that wraps one or more torch modules together -with potential processing logic to use in a solver. For example, a model would wrap an encoder module, -a quantisation bottleneck module, a decoder and some tensor processing logic. Each of the previous components -can be considered as a small « model unit » on its own but the container model is a practical component -to manipulate and train a set of modules together. - -### Datasets - -See the [dedicated documentation on datasets](./DATASETS.md). - -### Metrics - -See the [dedicated documentation on metrics](./METRICS.md). - -### Conditioners - -AudioCraft language models can be conditioned in various ways and the codebase offers a modular implementation -of different conditioners that can be potentially combined together. -Learn more in the [dedicated documentation on conditioning](./CONDITIONING.md). - -### Configuration - -AudioCraft's configuration is defined in yaml files and the framework relies on -[hydra](https://hydra.cc/docs/intro/) and [omegaconf](https://omegaconf.readthedocs.io/) to parse -and manipulate the configuration through Dora. - -##### :warning: Important considerations around configurations - -Our configuration management relies on Hydra and the concept of group configs to structure -and compose configurations. Updating the root default configuration files will then have -an impact on all solvers and tasks. -**One should never change the default configuration files. Instead they should use Hydra config groups in order to store custom configuration.** -Once this configuration is created and used for running experiments, you should not edit it anymore. - -Note that as we are using Dora as our experiment manager, all our experiment tracking is based on -signatures computed from delta between configurations. -**One must therefore ensure backward compatibilty of the configuration at all time.** -See [Dora's README](https://github.com/facebookresearch/dora) and the -[section below introduction Dora](#running-experiments-with-dora). - -##### Configuration structure - -The configuration is organized in config groups: -* `conditioner`: default values for conditioning modules. -* `dset`: contains all data source related information (paths to manifest files -and metadata for a given dataset). -* `model`: contains configuration for each model defined in AudioCraft and configurations -for different variants of models. -* `solver`: contains the default configuration for each solver as well as configuration -for each solver task, combining all the above components. -* `teams`: contains the cluster configuration per teams. See environment setup for more details. - -The `config.yaml` file is the main configuration that composes the above groups -and contains default configuration for AudioCraft. - -##### Solver's core configuration structure - -The core configuration structure shared across solver is available in `solvers/default.yaml`. - -##### Other configuration modules - -AudioCraft configuration contains the different setups we used for our research and publications. - -## Running experiments with Dora - -### Launching jobs - -Try launching jobs for different tasks locally with dora run: - -```shell -# run compression task with lightweight encodec -dora run solver=compression/debug -``` - -Most of the time, the jobs are launched through dora grids, for example: - -```shell -# run compression task through debug grid -dora grid compression.debug -``` - -Learn more about running experiments with Dora below. - -### A small introduction to Dora - -[Dora](https://github.com/facebookresearch/dora) is the experiment manager tool used in AudioCraft. -Check out the README to learn how Dora works. Here is a quick summary of what to know: -* An XP is a unique set of hyper-parameters with a given signature. The signature is a hash -of those hyper-parameters. We always refer to an XP with its signature, e.g. 9357e12e. We will see -after that one can retrieve the hyper-params and re-rerun it in a single command. -* In fact, the hash is defined as a delta between the base config and the one obtained -with the config overrides you passed from the command line. This means you must never change -the `conf/**.yaml` files directly., except for editing things like paths. Changing the default values -in the config files means the XP signature won't reflect that change, and wrong checkpoints might be reused. -I know, this is annoying, but the reason is that otherwise, any change to the config file would mean -that all XPs ran so far would see their signature change. - -#### Dora commands - -```shell -dora info -f 81de367c # this will show the hyper-parameter used by a specific XP. - # Be careful some overrides might present twice, and the right most one - # will give you the right value for it. - -dora run -d -f 81de367c # run an XP with the hyper-parameters from XP 81de367c. - # `-d` is for distributed, it will use all available GPUs. - -dora run -d -f 81de367c dataset.batch_size=32 # start from the config of XP 81de367c but change some hyper-params. - # This will give you a new XP with a new signature (e.g. 3fe9c332). - -dora info -f SIG -t # will tail the log (if the XP has scheduled). -# if you need to access the logs of the process for rank > 0, in particular because a crash didn't happen in the main -# process, then use `dora info -f SIG` to get the main log name (finished into something like `/5037674_0_0_log.out`) -# and worker K can accessed as `/5037674_0_{K}_log.out`. -# This is only for scheduled jobs, for local distributed runs with `-d`, then you should go into the XP folder, -# and look for `worker_{K}.log` logs. -``` - -An XP runs from a specific folder based on its signature, under the -`//experiments/audiocraft/outputs/` folder. -You can safely interrupt a training and resume it, it will reuse any existing checkpoint, -as it will reuse the same folder. If you made some change to the code and need to ignore -a previous checkpoint you can use `dora run --clear [RUN ARGS]`. - -If you have a Slurm cluster, you can also use the dora grid command, e.g. - -```shell -# run a dummy grid located at `audiocraft/grids/my_grid_folder/my_grid_name.py` -dora grid my_grid_folder.my_grid_name -# Run the following will simply display the grid and also initialized the Dora experiments database. -# You can then simply refer to a config using its signature (e.g. as `dora run -f SIG`). -dora grid my_grid_folder.my_grid_name --dry_run --init -``` - -Please refer to the [Dora documentation](https://github.com/facebookresearch/dora) for more information. - - -#### Clearing up past experiments - -```shell -# This will cancel all the XPs and delete their folder and checkpoints. -# It will then reschedule them starting from scratch. -dora grid my_grid_folder.my_grid_name --clear -# The following will delete the folder and checkpoint for a single XP, -# and then run it afresh. -dora run [-f BASE_SIG] [ARGS] --clear -``` diff --git a/spaces/PureNaCl/Toxic-Tweets-MS2/README.md b/spaces/PureNaCl/Toxic-Tweets-MS2/README.md deleted file mode 100644 index 25b906ee7980cd770daff82f4d47311f03b36283..0000000000000000000000000000000000000000 --- a/spaces/PureNaCl/Toxic-Tweets-MS2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Toxic Tweets MS2 -emoji: 💩 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - diff --git a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/distributions/distributions.py b/spaces/RamAnanth1/T2I-Adapter/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/T2I-Adapter/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). - logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/__init__.py deleted file mode 100644 index 9f73ca7105ff0bf11d74dd16ffb0653059466f70..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/metadata/__init__.py +++ /dev/null @@ -1,127 +0,0 @@ -import contextlib -import functools -import os -import sys -from typing import TYPE_CHECKING, List, Optional, Type, cast - -from pip._internal.utils.misc import strtobool - -from .base import BaseDistribution, BaseEnvironment, FilesystemWheel, MemoryWheel, Wheel - -if TYPE_CHECKING: - from typing import Protocol -else: - Protocol = object - -__all__ = [ - "BaseDistribution", - "BaseEnvironment", - "FilesystemWheel", - "MemoryWheel", - "Wheel", - "get_default_environment", - "get_environment", - "get_wheel_distribution", - "select_backend", -] - - -def _should_use_importlib_metadata() -> bool: - """Whether to use the ``importlib.metadata`` or ``pkg_resources`` backend. - - By default, pip uses ``importlib.metadata`` on Python 3.11+, and - ``pkg_resourcess`` otherwise. This can be overridden by a couple of ways: - - * If environment variable ``_PIP_USE_IMPORTLIB_METADATA`` is set, it - dictates whether ``importlib.metadata`` is used, regardless of Python - version. - * On Python 3.11+, Python distributors can patch ``importlib.metadata`` - to add a global constant ``_PIP_USE_IMPORTLIB_METADATA = False``. This - makes pip use ``pkg_resources`` (unless the user set the aforementioned - environment variable to *True*). - """ - with contextlib.suppress(KeyError, ValueError): - return bool(strtobool(os.environ["_PIP_USE_IMPORTLIB_METADATA"])) - if sys.version_info < (3, 11): - return False - import importlib.metadata - - return bool(getattr(importlib.metadata, "_PIP_USE_IMPORTLIB_METADATA", True)) - - -class Backend(Protocol): - Distribution: Type[BaseDistribution] - Environment: Type[BaseEnvironment] - - -@functools.lru_cache(maxsize=None) -def select_backend() -> Backend: - if _should_use_importlib_metadata(): - from . import importlib - - return cast(Backend, importlib) - from . import pkg_resources - - return cast(Backend, pkg_resources) - - -def get_default_environment() -> BaseEnvironment: - """Get the default representation for the current environment. - - This returns an Environment instance from the chosen backend. The default - Environment instance should be built from ``sys.path`` and may use caching - to share instance state accorss calls. - """ - return select_backend().Environment.default() - - -def get_environment(paths: Optional[List[str]]) -> BaseEnvironment: - """Get a representation of the environment specified by ``paths``. - - This returns an Environment instance from the chosen backend based on the - given import paths. The backend must build a fresh instance representing - the state of installed distributions when this function is called. - """ - return select_backend().Environment.from_paths(paths) - - -def get_directory_distribution(directory: str) -> BaseDistribution: - """Get the distribution metadata representation in the specified directory. - - This returns a Distribution instance from the chosen backend based on - the given on-disk ``.dist-info`` directory. - """ - return select_backend().Distribution.from_directory(directory) - - -def get_wheel_distribution(wheel: Wheel, canonical_name: str) -> BaseDistribution: - """Get the representation of the specified wheel's distribution metadata. - - This returns a Distribution instance from the chosen backend based on - the given wheel's ``.dist-info`` directory. - - :param canonical_name: Normalized project name of the given wheel. - """ - return select_backend().Distribution.from_wheel(wheel, canonical_name) - - -def get_metadata_distribution( - metadata_contents: bytes, - filename: str, - canonical_name: str, -) -> BaseDistribution: - """Get the dist representation of the specified METADATA file contents. - - This returns a Distribution instance from the chosen backend sourced from the data - in `metadata_contents`. - - :param metadata_contents: Contents of a METADATA file within a dist, or one served - via PEP 658. - :param filename: Filename for the dist this metadata represents. - :param canonical_name: Normalized project name of the given dist. - """ - return select_backend().Distribution.from_metadata_file_contents( - metadata_contents, - filename, - canonical_name, - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/dirtools.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/dirtools.py deleted file mode 100644 index 3eff4d801ba9bc29ceb80149cd949456b5db27db..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pep517/dirtools.py +++ /dev/null @@ -1,19 +0,0 @@ -import io -import os -import zipfile - - -def dir_to_zipfile(root): - """Construct an in-memory zip file for a directory.""" - buffer = io.BytesIO() - zip_file = zipfile.ZipFile(buffer, 'w') - for root, dirs, files in os.walk(root): - for path in dirs: - fs_path = os.path.join(root, path) - rel_path = os.path.relpath(fs_path, root) - zip_file.writestr(rel_path + '/', '') - for path in files: - fs_path = os.path.join(root, path) - rel_path = os.path.relpath(fs_path, root) - zip_file.write(fs_path, rel_path) - return zip_file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/status_codes.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/status_codes.py deleted file mode 100644 index 4bd072be9769748a852740d037d5c63021472c9d..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/status_codes.py +++ /dev/null @@ -1,128 +0,0 @@ -r""" -The ``codes`` object defines a mapping from common names for HTTP statuses -to their numerical codes, accessible either as attributes or as dictionary -items. - -Example:: - - >>> import requests - >>> requests.codes['temporary_redirect'] - 307 - >>> requests.codes.teapot - 418 - >>> requests.codes['\o/'] - 200 - -Some codes have multiple names, and both upper- and lower-case versions of -the names are allowed. For example, ``codes.ok``, ``codes.OK``, and -``codes.okay`` all correspond to the HTTP status code 200. -""" - -from .structures import LookupDict - -_codes = { - # Informational. - 100: ("continue",), - 101: ("switching_protocols",), - 102: ("processing",), - 103: ("checkpoint",), - 122: ("uri_too_long", "request_uri_too_long"), - 200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"), - 201: ("created",), - 202: ("accepted",), - 203: ("non_authoritative_info", "non_authoritative_information"), - 204: ("no_content",), - 205: ("reset_content", "reset"), - 206: ("partial_content", "partial"), - 207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"), - 208: ("already_reported",), - 226: ("im_used",), - # Redirection. - 300: ("multiple_choices",), - 301: ("moved_permanently", "moved", "\\o-"), - 302: ("found",), - 303: ("see_other", "other"), - 304: ("not_modified",), - 305: ("use_proxy",), - 306: ("switch_proxy",), - 307: ("temporary_redirect", "temporary_moved", "temporary"), - 308: ( - "permanent_redirect", - "resume_incomplete", - "resume", - ), # "resume" and "resume_incomplete" to be removed in 3.0 - # Client Error. - 400: ("bad_request", "bad"), - 401: ("unauthorized",), - 402: ("payment_required", "payment"), - 403: ("forbidden",), - 404: ("not_found", "-o-"), - 405: ("method_not_allowed", "not_allowed"), - 406: ("not_acceptable",), - 407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"), - 408: ("request_timeout", "timeout"), - 409: ("conflict",), - 410: ("gone",), - 411: ("length_required",), - 412: ("precondition_failed", "precondition"), - 413: ("request_entity_too_large",), - 414: ("request_uri_too_large",), - 415: ("unsupported_media_type", "unsupported_media", "media_type"), - 416: ( - "requested_range_not_satisfiable", - "requested_range", - "range_not_satisfiable", - ), - 417: ("expectation_failed",), - 418: ("im_a_teapot", "teapot", "i_am_a_teapot"), - 421: ("misdirected_request",), - 422: ("unprocessable_entity", "unprocessable"), - 423: ("locked",), - 424: ("failed_dependency", "dependency"), - 425: ("unordered_collection", "unordered"), - 426: ("upgrade_required", "upgrade"), - 428: ("precondition_required", "precondition"), - 429: ("too_many_requests", "too_many"), - 431: ("header_fields_too_large", "fields_too_large"), - 444: ("no_response", "none"), - 449: ("retry_with", "retry"), - 450: ("blocked_by_windows_parental_controls", "parental_controls"), - 451: ("unavailable_for_legal_reasons", "legal_reasons"), - 499: ("client_closed_request",), - # Server Error. - 500: ("internal_server_error", "server_error", "/o\\", "✗"), - 501: ("not_implemented",), - 502: ("bad_gateway",), - 503: ("service_unavailable", "unavailable"), - 504: ("gateway_timeout",), - 505: ("http_version_not_supported", "http_version"), - 506: ("variant_also_negotiates",), - 507: ("insufficient_storage",), - 509: ("bandwidth_limit_exceeded", "bandwidth"), - 510: ("not_extended",), - 511: ("network_authentication_required", "network_auth", "network_authentication"), -} - -codes = LookupDict(name="status_codes") - - -def _init(): - for code, titles in _codes.items(): - for title in titles: - setattr(codes, title, code) - if not title.startswith(("\\", "/")): - setattr(codes, title.upper(), code) - - def doc(code): - names = ", ".join(f"``{n}``" for n in _codes[code]) - return "* %d: %s" % (code, names) - - global __doc__ - __doc__ = ( - __doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes)) - if __doc__ is not None - else None - ) - - -_init() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py deleted file mode 100644 index f09bd644207e5c5a891d3605cb6aff4f00d70c8a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/install_scripts.py +++ /dev/null @@ -1,61 +0,0 @@ -"""distutils.command.install_scripts - -Implements the Distutils 'install_scripts' command, for installing -Python scripts.""" - -# contributed by Bastian Kleineidam - -import os -from distutils.core import Command -from distutils import log -from stat import ST_MODE - - -class install_scripts(Command): - - description = "install scripts (Python or otherwise)" - - user_options = [ - ('install-dir=', 'd', "directory to install scripts to"), - ('build-dir=', 'b', "build directory (where to install from)"), - ('force', 'f', "force installation (overwrite existing files)"), - ('skip-build', None, "skip the build steps"), - ] - - boolean_options = ['force', 'skip-build'] - - def initialize_options(self): - self.install_dir = None - self.force = 0 - self.build_dir = None - self.skip_build = None - - def finalize_options(self): - self.set_undefined_options('build', ('build_scripts', 'build_dir')) - self.set_undefined_options( - 'install', - ('install_scripts', 'install_dir'), - ('force', 'force'), - ('skip_build', 'skip_build'), - ) - - def run(self): - if not self.skip_build: - self.run_command('build_scripts') - self.outfiles = self.copy_tree(self.build_dir, self.install_dir) - if os.name == 'posix': - # Set the executable bits (owner, group, and world) on - # all the scripts we just installed. - for file in self.get_outputs(): - if self.dry_run: - log.info("changing mode of %s", file) - else: - mode = ((os.stat(file)[ST_MODE]) | 0o555) & 0o7777 - log.info("changing mode of %s to %o", file, mode) - os.chmod(file, mode) - - def get_inputs(self): - return self.distribution.scripts or [] - - def get_outputs(self): - return self.outfiles or [] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/filelist.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/filelist.py deleted file mode 100644 index 987931a9883ff36862dbd0831bd0a16903977879..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/filelist.py +++ /dev/null @@ -1,371 +0,0 @@ -"""distutils.filelist - -Provides the FileList class, used for poking about the filesystem -and building lists of files. -""" - -import os -import re -import fnmatch -import functools - -from distutils.util import convert_path -from distutils.errors import DistutilsTemplateError, DistutilsInternalError -from distutils import log - - -class FileList: - """A list of files built by on exploring the filesystem and filtered by - applying various patterns to what we find there. - - Instance attributes: - dir - directory from which files will be taken -- only used if - 'allfiles' not supplied to constructor - files - list of filenames currently being built/filtered/manipulated - allfiles - complete list of files under consideration (ie. without any - filtering applied) - """ - - def __init__(self, warn=None, debug_print=None): - # ignore argument to FileList, but keep them for backwards - # compatibility - self.allfiles = None - self.files = [] - - def set_allfiles(self, allfiles): - self.allfiles = allfiles - - def findall(self, dir=os.curdir): - self.allfiles = findall(dir) - - def debug_print(self, msg): - """Print 'msg' to stdout if the global DEBUG (taken from the - DISTUTILS_DEBUG environment variable) flag is true. - """ - from distutils.debug import DEBUG - - if DEBUG: - print(msg) - - # Collection methods - - def append(self, item): - self.files.append(item) - - def extend(self, items): - self.files.extend(items) - - def sort(self): - # Not a strict lexical sort! - sortable_files = sorted(map(os.path.split, self.files)) - self.files = [] - for sort_tuple in sortable_files: - self.files.append(os.path.join(*sort_tuple)) - - # Other miscellaneous utility methods - - def remove_duplicates(self): - # Assumes list has been sorted! - for i in range(len(self.files) - 1, 0, -1): - if self.files[i] == self.files[i - 1]: - del self.files[i] - - # "File template" methods - - def _parse_template_line(self, line): - words = line.split() - action = words[0] - - patterns = dir = dir_pattern = None - - if action in ('include', 'exclude', 'global-include', 'global-exclude'): - if len(words) < 2: - raise DistutilsTemplateError( - "'%s' expects ..." % action - ) - patterns = [convert_path(w) for w in words[1:]] - elif action in ('recursive-include', 'recursive-exclude'): - if len(words) < 3: - raise DistutilsTemplateError( - "'%s' expects ..." % action - ) - dir = convert_path(words[1]) - patterns = [convert_path(w) for w in words[2:]] - elif action in ('graft', 'prune'): - if len(words) != 2: - raise DistutilsTemplateError( - "'%s' expects a single " % action - ) - dir_pattern = convert_path(words[1]) - else: - raise DistutilsTemplateError("unknown action '%s'" % action) - - return (action, patterns, dir, dir_pattern) - - def process_template_line(self, line): # noqa: C901 - # Parse the line: split it up, make sure the right number of words - # is there, and return the relevant words. 'action' is always - # defined: it's the first word of the line. Which of the other - # three are defined depends on the action; it'll be either - # patterns, (dir and patterns), or (dir_pattern). - (action, patterns, dir, dir_pattern) = self._parse_template_line(line) - - # OK, now we know that the action is valid and we have the - # right number of words on the line for that action -- so we - # can proceed with minimal error-checking. - if action == 'include': - self.debug_print("include " + ' '.join(patterns)) - for pattern in patterns: - if not self.include_pattern(pattern, anchor=1): - log.warn("warning: no files found matching '%s'", pattern) - - elif action == 'exclude': - self.debug_print("exclude " + ' '.join(patterns)) - for pattern in patterns: - if not self.exclude_pattern(pattern, anchor=1): - log.warn( - ( - "warning: no previously-included files " - "found matching '%s'" - ), - pattern, - ) - - elif action == 'global-include': - self.debug_print("global-include " + ' '.join(patterns)) - for pattern in patterns: - if not self.include_pattern(pattern, anchor=0): - log.warn( - ( - "warning: no files found matching '%s' " - "anywhere in distribution" - ), - pattern, - ) - - elif action == 'global-exclude': - self.debug_print("global-exclude " + ' '.join(patterns)) - for pattern in patterns: - if not self.exclude_pattern(pattern, anchor=0): - log.warn( - ( - "warning: no previously-included files matching " - "'%s' found anywhere in distribution" - ), - pattern, - ) - - elif action == 'recursive-include': - self.debug_print("recursive-include {} {}".format(dir, ' '.join(patterns))) - for pattern in patterns: - if not self.include_pattern(pattern, prefix=dir): - msg = ( - "warning: no files found matching '%s' " "under directory '%s'" - ) - log.warn(msg, pattern, dir) - - elif action == 'recursive-exclude': - self.debug_print("recursive-exclude {} {}".format(dir, ' '.join(patterns))) - for pattern in patterns: - if not self.exclude_pattern(pattern, prefix=dir): - log.warn( - ( - "warning: no previously-included files matching " - "'%s' found under directory '%s'" - ), - pattern, - dir, - ) - - elif action == 'graft': - self.debug_print("graft " + dir_pattern) - if not self.include_pattern(None, prefix=dir_pattern): - log.warn("warning: no directories found matching '%s'", dir_pattern) - - elif action == 'prune': - self.debug_print("prune " + dir_pattern) - if not self.exclude_pattern(None, prefix=dir_pattern): - log.warn( - ("no previously-included directories found " "matching '%s'"), - dir_pattern, - ) - else: - raise DistutilsInternalError( - "this cannot happen: invalid action '%s'" % action - ) - - # Filtering/selection methods - - def include_pattern(self, pattern, anchor=1, prefix=None, is_regex=0): - """Select strings (presumably filenames) from 'self.files' that - match 'pattern', a Unix-style wildcard (glob) pattern. Patterns - are not quite the same as implemented by the 'fnmatch' module: '*' - and '?' match non-special characters, where "special" is platform- - dependent: slash on Unix; colon, slash, and backslash on - DOS/Windows; and colon on Mac OS. - - If 'anchor' is true (the default), then the pattern match is more - stringent: "*.py" will match "foo.py" but not "foo/bar.py". If - 'anchor' is false, both of these will match. - - If 'prefix' is supplied, then only filenames starting with 'prefix' - (itself a pattern) and ending with 'pattern', with anything in between - them, will match. 'anchor' is ignored in this case. - - If 'is_regex' is true, 'anchor' and 'prefix' are ignored, and - 'pattern' is assumed to be either a string containing a regex or a - regex object -- no translation is done, the regex is just compiled - and used as-is. - - Selected strings will be added to self.files. - - Return True if files are found, False otherwise. - """ - # XXX docstring lying about what the special chars are? - files_found = False - pattern_re = translate_pattern(pattern, anchor, prefix, is_regex) - self.debug_print("include_pattern: applying regex r'%s'" % pattern_re.pattern) - - # delayed loading of allfiles list - if self.allfiles is None: - self.findall() - - for name in self.allfiles: - if pattern_re.search(name): - self.debug_print(" adding " + name) - self.files.append(name) - files_found = True - return files_found - - def exclude_pattern(self, pattern, anchor=1, prefix=None, is_regex=0): - """Remove strings (presumably filenames) from 'files' that match - 'pattern'. Other parameters are the same as for - 'include_pattern()', above. - The list 'self.files' is modified in place. - Return True if files are found, False otherwise. - """ - files_found = False - pattern_re = translate_pattern(pattern, anchor, prefix, is_regex) - self.debug_print("exclude_pattern: applying regex r'%s'" % pattern_re.pattern) - for i in range(len(self.files) - 1, -1, -1): - if pattern_re.search(self.files[i]): - self.debug_print(" removing " + self.files[i]) - del self.files[i] - files_found = True - return files_found - - -# Utility functions - - -def _find_all_simple(path): - """ - Find all files under 'path' - """ - all_unique = _UniqueDirs.filter(os.walk(path, followlinks=True)) - results = ( - os.path.join(base, file) for base, dirs, files in all_unique for file in files - ) - return filter(os.path.isfile, results) - - -class _UniqueDirs(set): - """ - Exclude previously-seen dirs from walk results, - avoiding infinite recursion. - Ref https://bugs.python.org/issue44497. - """ - - def __call__(self, walk_item): - """ - Given an item from an os.walk result, determine - if the item represents a unique dir for this instance - and if not, prevent further traversal. - """ - base, dirs, files = walk_item - stat = os.stat(base) - candidate = stat.st_dev, stat.st_ino - found = candidate in self - if found: - del dirs[:] - self.add(candidate) - return not found - - @classmethod - def filter(cls, items): - return filter(cls(), items) - - -def findall(dir=os.curdir): - """ - Find all files under 'dir' and return the list of full filenames. - Unless dir is '.', return full filenames with dir prepended. - """ - files = _find_all_simple(dir) - if dir == os.curdir: - make_rel = functools.partial(os.path.relpath, start=dir) - files = map(make_rel, files) - return list(files) - - -def glob_to_re(pattern): - """Translate a shell-like glob pattern to a regular expression; return - a string containing the regex. Differs from 'fnmatch.translate()' in - that '*' does not match "special characters" (which are - platform-specific). - """ - pattern_re = fnmatch.translate(pattern) - - # '?' and '*' in the glob pattern become '.' and '.*' in the RE, which - # IMHO is wrong -- '?' and '*' aren't supposed to match slash in Unix, - # and by extension they shouldn't match such "special characters" under - # any OS. So change all non-escaped dots in the RE to match any - # character except the special characters (currently: just os.sep). - sep = os.sep - if os.sep == '\\': - # we're using a regex to manipulate a regex, so we need - # to escape the backslash twice - sep = r'\\\\' - escaped = r'\1[^%s]' % sep - pattern_re = re.sub(r'((?, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/__init__.py deleted file mode 100644 index e812391e23894ef296755381386d4849f774418a..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/core/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .anchor import * # noqa: F401, F403 -from .bbox import * # noqa: F401, F403 -from .evaluation import * # noqa: F401, F403 -from .export import * # noqa: F401, F403 -from .mask import * # noqa: F401, F403 -from .post_processing import * # noqa: F401, F403 -from .utils import * # noqa: F401, F403 diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/pipelines/compose.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/pipelines/compose.py deleted file mode 100644 index cbfcbb925c6d4ebf849328b9f94ef6fc24359bf5..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/datasets/pipelines/compose.py +++ /dev/null @@ -1,51 +0,0 @@ -import collections - -from annotator.uniformer.mmcv.utils import build_from_cfg - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class Compose(object): - """Compose multiple transforms sequentially. - - Args: - transforms (Sequence[dict | callable]): Sequence of transform object or - config dict to be composed. - """ - - def __init__(self, transforms): - assert isinstance(transforms, collections.abc.Sequence) - self.transforms = [] - for transform in transforms: - if isinstance(transform, dict): - transform = build_from_cfg(transform, PIPELINES) - self.transforms.append(transform) - elif callable(transform): - self.transforms.append(transform) - else: - raise TypeError('transform must be callable or a dict') - - def __call__(self, data): - """Call function to apply transforms sequentially. - - Args: - data (dict): A result dict contains the data to transform. - - Returns: - dict: Transformed data. - """ - - for t in self.transforms: - data = t(data) - if data is None: - return None - return data - - def __repr__(self): - format_string = self.__class__.__name__ + '(' - for t in self.transforms: - format_string += '\n' - format_string += f' {t}' - format_string += '\n)' - return format_string diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/lraspp_m-v3-d8.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/lraspp_m-v3-d8.py deleted file mode 100644 index 93258242a90695cc94a7c6bd41562d6a75988771..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/models/lraspp_m-v3-d8.py +++ /dev/null @@ -1,25 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='MobileNetV3', - arch='large', - out_indices=(1, 3, 16), - norm_cfg=norm_cfg), - decode_head=dict( - type='LRASPPHead', - in_channels=(16, 24, 960), - in_index=(0, 1, 2), - channels=128, - input_transform='multiple_select', - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/RunningYou/mediapipe_inpainting/README.md b/spaces/RunningYou/mediapipe_inpainting/README.md deleted file mode 100644 index cec673698f4f795088dad451df868a742705118f..0000000000000000000000000000000000000000 --- a/spaces/RunningYou/mediapipe_inpainting/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: New Inpainting -emoji: 💩 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c b/spaces/SERER/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c deleted file mode 100644 index 5631d20a9a00db29e143a6e8e4e5c378d6bb850a..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c +++ /dev/null @@ -1,21299 +0,0 @@ -/* Generated by Cython 0.29.21 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. -#else -#define CYTHON_ABI "0_29_21" -#define CYTHON_HEX_VERSION 0x001D15F0 -#define CYTHON_FUTURE_DIVISION 0 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #ifndef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #include "longintrepr.h" - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" -#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2 - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t PyInt_AsLong -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? ((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(WIN32) || defined(MS_WINDOWS) - #define _USE_MATH_DEFINES -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include -#include -#include -#include "pystate.h" -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\ - !defined(__i386__) - #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0 - #include - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type LONG - #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0 - #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using Intel atomics" - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":279 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* IncludeStringH.proto */ -#include - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* None.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* None.proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of 'monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = ""; -static const char __pyx_k_strided_and_indirect[] = ""; -static const char __pyx_k_contiguous_and_direct[] = ""; -static const char __pyx_k_MemoryView_of_r_object[] = ""; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = ""; -static const char __pyx_k_contiguous_and_indirect[] = ""; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = ""; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject *__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; -static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_codeobj__26; -/* Late includes */ - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = ((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error) - } else { - - /* "View.MemoryView":123 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":129 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 129, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":130 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 133, __pyx_L1_error) - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 136, __pyx_L1_error) - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":139 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":140 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error) - __pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":141 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 141, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":144 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":145 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 148, __pyx_L1_error) - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":153 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 153, __pyx_L1_error) - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":154 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":158 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":159 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":161 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":162 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":164 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 164, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":166 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":169 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":170 - * - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":174 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 176, __pyx_L1_error) - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":179 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":180 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":181 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":182 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - Py_INCREF(Py_None); - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":186 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":188 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = (PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":190 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 192, __pyx_L1_error) - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":193 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":194 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":195 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":196 - * info.len = self.len - * info.ndim = self.ndim - * info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":197 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":198 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":199 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":200 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":203 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":205 - * info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":207 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":213 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":218 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":219 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":223 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":227 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":228 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":231 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":234 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":237 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":240 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":249 - * - * if buf == NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":252 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error) - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":253 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":255 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":282 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":284 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":300 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":304 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":307 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":309 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - - /* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":346 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":347 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":349 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error) - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":351 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":352 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * global __pyx_memoryview_thread_locks_used - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":356 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":357 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":359 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":361 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error) - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L10; - } - - /* "View.MemoryView":366 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L10:; - - /* "View.MemoryView":368 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":370 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":374 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":377 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":378 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":383 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":388 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":387 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":391 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":395 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 397, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":398 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":400 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":405 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":407 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 407, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":411 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":413 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":414 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 418, __pyx_L1_error) - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":420 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 420, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":423 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":427 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":435 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":436 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":439 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":446 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error) - - /* "View.MemoryView":447 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":451 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":456 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":459 - * - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":461 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error) - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":462 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":464 - * item = tmp - * else: - * item = array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":466 - * item = array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * ( item)[0] = value - */ - /*try:*/ { - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * ( item)[0] = value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":468 - * try: - * if self.dtype_is_object: - * ( item)[0] = value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object( item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * ( item)[0] = value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":470 - * ( item)[0] = value - * else: - * self.assign_item_from_object( item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":475 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":476 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":479 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":482 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":483 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":488 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":491 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":493 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":498 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":499 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":494 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 495, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":504 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":510 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":512 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 514, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 520, __pyx_L1_error) - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":523 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":525 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":528 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":530 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":533 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":535 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":538 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":540 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":542 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":543 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":544 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* "View.MemoryView":545 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":546 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":547 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":554 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":555 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error) - - /* "View.MemoryView":556 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":560 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":564 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 570, __pyx_L1_error) - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":572 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":579 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":583 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":587 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":591 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":596 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":598 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":599 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6); - __pyx_t_6 = 0; - } - - /* "View.MemoryView":601 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":603 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":607 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":609 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":613 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":616 - * - * def __str__(self): - * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":622 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":623 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":629 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":633 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":635 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":636 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":641 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":645 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":647 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":648 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":653 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":658 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":659 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":660 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":664 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":672 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":674 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":676 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":677 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":678 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 679, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":683 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":685 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":686 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":689 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 689, __pyx_L1_error) - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":691 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":692 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - * - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":694 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":696 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":698 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - __pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":711 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* "View.MemoryView":718 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":722 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 722, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":725 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":726 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":728 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":729 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":735 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":736 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":741 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":742 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - __pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 746, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":751 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error) - - /* "View.MemoryView":748 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error) - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":755 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":756 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":757 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":758 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":760 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":761 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":762 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":764 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":765 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":766 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":768 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error) - - /* "View.MemoryView":774 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":778 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) } - - /* "View.MemoryView":779 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) } - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":783 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int __pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":830 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error) - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":835 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":838 - * - * if have_step and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error) - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":843 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":850 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":853 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":855 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":859 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":866 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":868 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":871 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":875 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":878 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - } - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":884 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":885 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":886 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":890 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":892 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":899 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * "must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":900 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":902 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":904 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":912 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":913 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":917 - * - * if view.ndim == 0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":918 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":920 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":921 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":923 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":926 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":928 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 928, __pyx_L1_error) - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":931 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 931, __pyx_L1_error) - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":933 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":935 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":937 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":944 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":946 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":947 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":951 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":952 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":953 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":954 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":957 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error) - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":959 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":977 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":981 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":983 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":987 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error) - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":989 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":993 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return &obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = (__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. - */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* "View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1111 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1113 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1121 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1122 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1124 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1126 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1127 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1129 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1131 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1132 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1135 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1137 - * return 'C' - * else: - * return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1147 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1148 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1154 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1158 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1159 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1160 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1162 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1163 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1168 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1173 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1179 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1182 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1184 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1197 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1198 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1199 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1201 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1202 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1203 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1205 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1219 - * cdef void *result - * - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1220 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1222 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1224 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1224, __pyx_L1_error) - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1227 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1228 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1229 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1230 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1231 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1233 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1237 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1239 - * for i in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1242 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1244 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1246 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1254 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1253 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1258 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 = __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1258, __pyx_L1_error) - - /* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1263 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1263, __pyx_L1_error) - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1265 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1265, __pyx_L1_error) - } - - /* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1276 - * Check for overlapping memory and verify the shapes. - * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1277 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1279 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1280 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1281 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1285 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1289 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1291 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1294 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1295 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1297 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1300 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1300, __pyx_L1_error) - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1305 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1307 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1307, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1308 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1314 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1316 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1320 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1321 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1322 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1323 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1329 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1329, __pyx_L1_error) - - /* "View.MemoryView":1330 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1330, __pyx_L1_error) - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1332 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1333 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1334 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1337 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1344 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1346 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1347 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1348 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1349 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1351 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1352 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1353 - * for i in range(offset): - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1354 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1367 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1381 - * cdef Py_ssize_t i - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - __pyx_t_4 = (__pyx_v_inc != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1384 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1386 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1388 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, inc) - * - */ - /*else*/ { - - /* "View.MemoryView":1389 - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - * ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += strides[0] - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1391 - * ndim - 1, inc) - * - * data += strides[0] # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0])); - } - - /* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1400 - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1401 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1403 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1411 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1412 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1415 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1416 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1417 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1419 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1420 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1422 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xb068931) != 0); - if (__pyx_t_1) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __pyx_v___pyx_PickleError = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v___pyx_result = __pyx_t_3; - __pyx_t_3 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_1 = (__pyx_v___pyx_state != Py_None); - __pyx_t_6 = (__pyx_t_1 != 0); - if (__pyx_t_6) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_3 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - -static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_k_Incompatible_checksums_s_vs_0xb0, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xb0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 133, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 151, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 404, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 613, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 832, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__25 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #ifdef WITH_THREAD -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - #ifdef WITH_THREAD /* Python build with threading support? */ - PyEval_InitThreads(); - #endif - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. ---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":209 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":287 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":316 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":317 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":549 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":995 - * return self.from_object - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = func->ob_type->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; - } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * __Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* None */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#else - if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#endif -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyObject *py_srcfile = 0; - PyObject *py_funcname = 0; - #if PY_MAJOR_VERSION < 3 - py_srcfile = PyString_FromString(filename); - #else - py_srcfile = PyUnicode_FromString(filename); - #endif - if (!py_srcfile) goto bad; - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - #else - py_funcname = PyUnicode_FromString(funcname); - #endif - } - if (!py_funcname) goto bad; - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - Py_DECREF(py_funcname); - return py_code; -bad: - Py_XDECREF(py_srcfile); - Py_XDECREF(py_funcname); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if (field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if (struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? "s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { - const char neg_one = (char) ((char) 0 - (char) 1), const_zero = (char) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). " - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/korean.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/utils/options.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/utils/options.py deleted file mode 100644 index 1045dd07381bd680b623d0187be5353e2e3dee80..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/utils/options.py +++ /dev/null @@ -1,129 +0,0 @@ -import os -import os.path as osp -from collections import OrderedDict - -import yaml - - -def ordered_yaml(): - """Support OrderedDict for yaml. - - Returns: - yaml Loader and Dumper. - """ - try: - from yaml import CDumper as Dumper - from yaml import CLoader as Loader - except ImportError: - from yaml import Dumper, Loader - - _mapping_tag = yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG - - def dict_representer(dumper, data): - return dumper.represent_dict(data.items()) - - def dict_constructor(loader, node): - return OrderedDict(loader.construct_pairs(node)) - - Dumper.add_representer(OrderedDict, dict_representer) - Loader.add_constructor(_mapping_tag, dict_constructor) - return Loader, Dumper - - -def parse(opt_path, is_train=True): - """Parse option file. - - Args: - opt_path (str): Option file path. - is_train (str): Indicate whether in training or not. Default: True. - - Returns: - (dict): Options. - """ - with open(opt_path, mode='r') as f: - Loader, _ = ordered_yaml() - opt = yaml.load(f, Loader=Loader) - - gpu_list = ','.join(str(x) for x in opt['gpu_ids']) - if opt.get('set_CUDA_VISIBLE_DEVICES', None): - os.environ['CUDA_VISIBLE_DEVICES'] = gpu_list - print('export CUDA_VISIBLE_DEVICES=' + gpu_list, flush=True) - else: - print('gpu_list: ', gpu_list, flush=True) - - opt['is_train'] = is_train - - # paths - opt['path'] = {} - opt['path']['root'] = osp.abspath( - osp.join(__file__, osp.pardir, osp.pardir)) - if is_train: - experiments_root = osp.join(opt['path']['root'], 'experiments', - opt['name']) - opt['path']['experiments_root'] = experiments_root - opt['path']['models'] = osp.join(experiments_root, 'models') - opt['path']['log'] = experiments_root - opt['path']['visualization'] = osp.join(experiments_root, - 'visualization') - - # change some options for debug mode - if 'debug' in opt['name']: - opt['debug'] = True - opt['val_freq'] = 1 - opt['print_freq'] = 1 - opt['save_checkpoint_freq'] = 1 - else: # test - results_root = osp.join(opt['path']['root'], 'results', opt['name']) - opt['path']['results_root'] = results_root - opt['path']['log'] = results_root - opt['path']['visualization'] = osp.join(results_root, 'visualization') - - return opt - - -def dict2str(opt, indent_level=1): - """dict to string for printing options. - - Args: - opt (dict): Option dict. - indent_level (int): Indent level. Default: 1. - - Return: - (str): Option string for printing. - """ - msg = '' - for k, v in opt.items(): - if isinstance(v, dict): - msg += ' ' * (indent_level * 2) + k + ':[\n' - msg += dict2str(v, indent_level + 1) - msg += ' ' * (indent_level * 2) + ']\n' - else: - msg += ' ' * (indent_level * 2) + k + ': ' + str(v) + '\n' - return msg - - -class NoneDict(dict): - """None dict. It will return none if key is not in the dict.""" - - def __missing__(self, key): - return None - - -def dict_to_nonedict(opt): - """Convert to NoneDict, which returns None for missing keys. - - Args: - opt (dict): Option dict. - - Returns: - (dict): NoneDict for options. - """ - if isinstance(opt, dict): - new_opt = dict() - for key, sub_opt in opt.items(): - new_opt[key] = dict_to_nonedict(sub_opt) - return NoneDict(**new_opt) - elif isinstance(opt, list): - return [dict_to_nonedict(sub_opt) for sub_opt in opt] - else: - return opt diff --git a/spaces/SUSTech/llm-evaluate/README.md b/spaces/SUSTech/llm-evaluate/README.md deleted file mode 100644 index 8b945eaa3bf6be9ee858dbe329cede227b919ab6..0000000000000000000000000000000000000000 --- a/spaces/SUSTech/llm-evaluate/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: LLM Evaluation -emoji: 🐢 -colorFrom: yellow -colorTo: purple -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sapphire-356/Video2MC/data/data_utils.py b/spaces/Sapphire-356/Video2MC/data/data_utils.py deleted file mode 100644 index 24945af2d6ed0d954c48187b67bad20ee5883498..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/data/data_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -import h5py -import numpy as np - -mpii_metadata = { - 'layout_name': 'mpii', - 'num_joints': 16, - 'keypoints_symmetry': [ - [3, 4, 5, 13, 14, 15], - [0, 1, 2, 10, 11, 12], - ] -} - -coco_metadata = { - 'layout_name': 'coco', - 'num_joints': 17, - 'keypoints_symmetry': [ - [1, 3, 5, 7, 9, 11, 13, 15], - [2, 4, 6, 8, 10, 12, 14, 16], - ] -} - -h36m_metadata = { - 'layout_name': 'h36m', - 'num_joints': 17, - 'keypoints_symmetry': [ - [4, 5, 6, 11, 12, 13], - [1, 2, 3, 14, 15, 16], - ] -} - -humaneva15_metadata = { - 'layout_name': 'humaneva15', - 'num_joints': 15, - 'keypoints_symmetry': [ - [2, 3, 4, 8, 9, 10], - [5, 6, 7, 11, 12, 13] - ] -} - -humaneva20_metadata = { - 'layout_name': 'humaneva20', - 'num_joints': 20, - 'keypoints_symmetry': [ - [3, 4, 5, 6, 11, 12, 13, 14], - [7, 8, 9, 10, 15, 16, 17, 18] - ] -} - - -def suggest_metadata(name): - names = [] - for metadata in [mpii_metadata, coco_metadata, h36m_metadata, humaneva15_metadata, humaneva20_metadata]: - if metadata['layout_name'] in name: - return metadata - names.append(metadata['layout_name']) - raise KeyError('Cannot infer keypoint layout from name "{}". Tried {}.'.format(name, names)) - - -def import_detectron_poses(path): - # Latin1 encoding because Detectron runs on Python 2.7 - data = np.load(path, encoding='latin1') - kp = data['keypoints'] - bb = data['boxes'] - results = [] - for i in range(len(bb)): - if len(bb[i][1]) == 0: - assert i > 0 - # Use last pose in case of detection failure - results.append(results[-1]) - continue - best_match = np.argmax(bb[i][1][:, 4]) - # import ipdb;ipdb.set_trace() - keypoints = kp[i][1][best_match].T.copy() - results.append(keypoints) - results = np.array(results) - # return results[:, :, 4:6] # Soft-argmax - return results[:, :, [0, 1, 3]] # Argmax + score - - -def my_pose(path): - data = np.load(path, encoding='latin1') - - -def import_cpn_poses(path): - data = np.load(path) - kp = data['keypoints'] - return kp[:, :, :2] - - -def import_sh_poses(path): - with h5py.File(path) as hf: - positions = hf['poses'].value - return positions.astype('float32') - - -def suggest_pose_importer(name): - if 'detectron' in name: - return import_detectron_poses - if 'cpn' in name: - return import_cpn_poses - if 'sh' in name: - return import_sh_poses - raise KeyError('Cannot infer keypoint format from name "{}". Tried detectron, cpn, sh.'.format(name)) diff --git a/spaces/Shuang59/Composable-Diffusion/app.py b/spaces/Shuang59/Composable-Diffusion/app.py deleted file mode 100644 index 92fa5f93e12e511396db6b754dd3766c752c05ae..0000000000000000000000000000000000000000 --- a/spaces/Shuang59/Composable-Diffusion/app.py +++ /dev/null @@ -1,404 +0,0 @@ -# -*- coding: utf-8 -*- -"""Copy of compose_glide.ipynb - -Automatically generated by Colaboratory. - -Original file is located at - https://colab.research.google.com/drive/19xx6Nu4FeiGj-TzTUFxBf-15IkeuFx_F -""" - -import gradio as gr -import torch as th - -from composable_diffusion.download import download_model -from composable_diffusion.model_creation import create_model_and_diffusion as create_model_and_diffusion_for_clevr -from composable_diffusion.model_creation import model_and_diffusion_defaults as model_and_diffusion_defaults_for_clevr -from composable_diffusion.composable_stable_diffusion.pipeline_composable_stable_diffusion import \ - ComposableStableDiffusionPipeline - -import os -import shutil -import time -import glob -import numpy as np -import open3d as o3d -import open3d.visualization.rendering as rendering - -import plotly.graph_objects as go -from PIL import Image -from tqdm.auto import tqdm -from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config -from point_e.diffusion.sampler import PointCloudSampler -from point_e.models.download import load_checkpoint -from point_e.models.configs import MODEL_CONFIGS, model_from_config -from point_e.util.pc_to_mesh import marching_cubes_mesh - -has_cuda = th.cuda.is_available() -device = th.device('cpu' if not th.cuda.is_available() else 'cuda') -print(has_cuda) - -# init stable diffusion model -pipe = ComposableStableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", -).to(device) - -# uncomment to disable safety_checker -# pipe.safety_checker = None - -# create model for CLEVR Objects -clevr_options = model_and_diffusion_defaults_for_clevr() - -flags = { - "image_size": 128, - "num_channels": 192, - "num_res_blocks": 2, - "learn_sigma": True, - "use_scale_shift_norm": False, - "raw_unet": True, - "noise_schedule": "squaredcos_cap_v2", - "rescale_learned_sigmas": False, - "rescale_timesteps": False, - "num_classes": '2', - "dataset": "clevr_pos", - "use_fp16": has_cuda, - "timestep_respacing": '100' -} - -for key, val in flags.items(): - clevr_options[key] = val - -clevr_model, clevr_diffusion = create_model_and_diffusion_for_clevr(**clevr_options) -clevr_model.eval() -if has_cuda: - clevr_model.convert_to_fp16() - -clevr_model.to(device) -clevr_model.load_state_dict(th.load(download_model('clevr_pos'), device)) -device = th.device('cpu' if not th.cuda.is_available() else 'cuda') - -print('creating base model...') -base_name = 'base40M-textvec' -base_model = model_from_config(MODEL_CONFIGS[base_name], device) -base_model.eval() -base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name]) - -print('creating upsample model...') -upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device) -upsampler_model.eval() -upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample']) - -print('downloading base checkpoint...') -base_model.load_state_dict(load_checkpoint(base_name, device)) - -print('downloading upsampler checkpoint...') -upsampler_model.load_state_dict(load_checkpoint('upsample', device)) - -print('creating SDF model...') -name = 'sdf' -model = model_from_config(MODEL_CONFIGS[name], device) -model.eval() - -print('loading SDF model...') -model.load_state_dict(load_checkpoint(name, device)) - - -def compose_pointe(prompt, weights, version): - weight_list = [float(x.strip()) for x in weights.split('|')] - sampler = PointCloudSampler( - device=device, - models=[base_model, upsampler_model], - diffusions=[base_diffusion, upsampler_diffusion], - num_points=[1024, 4096 - 1024], - aux_channels=['R', 'G', 'B'], - guidance_scale=[weight_list, 0.0], - model_kwargs_key_filter=('texts', ''), # Do not condition the upsampler at all - ) - - def generate_pcd(prompt_list): - # Produce a sample from the model. - samples = None - for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=prompt_list))): - samples = x - return samples - - def generate_fig(samples): - pc = sampler.output_to_point_clouds(samples)[0] - return pc - - def generate_mesh(pc): - mesh = marching_cubes_mesh( - pc=pc, - model=model, - batch_size=4096, - grid_size=128, # increase to 128 for resolution used in evals - progress=True, - ) - return mesh - - def generate_video(mesh_path): - render = rendering.OffscreenRenderer(640, 480) - mesh = o3d.io.read_triangle_mesh(mesh_path) - mesh.compute_vertex_normals() - - mat = o3d.visualization.rendering.MaterialRecord() - mat.shader = 'defaultLit' - - render.scene.camera.look_at([0, 0, 0], [1, 1, 1], [0, 0, 1]) - render.scene.add_geometry('mesh', mesh, mat) - - timestr = time.strftime("%Y%m%d-%H%M%S") - os.makedirs(timestr, exist_ok=True) - - def update_geometry(): - render.scene.clear_geometry() - render.scene.add_geometry('mesh', mesh, mat) - - def generate_images(): - for i in range(64): - # Rotation - R = mesh.get_rotation_matrix_from_xyz((0, 0, np.pi / 32)) - mesh.rotate(R, center=(0, 0, 0)) - # Update geometry - update_geometry() - img = render.render_to_image() - o3d.io.write_image(os.path.join(timestr + "/{:05d}.jpg".format(i)), img, quality=100) - time.sleep(0.05) - - generate_images() - image_list = [] - for filename in sorted(glob.glob(f'{timestr}/*.jpg')): # assuming gif - im = Image.open(filename) - image_list.append(im) - # remove the folder - shutil.rmtree(timestr) - return image_list - - prompt_list = [x.strip() for x in prompt.split("|")] - pcd = generate_pcd(prompt_list) - pc = generate_fig(pcd) - - fig = go.Figure( - data=[ - go.Scatter3d( - x=pc.coords[:, 0], y=pc.coords[:, 1], z=pc.coords[:, 2], - mode='markers', - marker=dict( - size=2, - color=['rgb({},{},{})'.format(r, g, b) for r, g, b in - zip(pc.channels["R"], pc.channels["G"], pc.channels["B"])], - ) - ) - ], - layout=dict( - scene=dict( - xaxis=dict(visible=False), - yaxis=dict(visible=False), - zaxis=dict(visible=False) - ) - ), - ) - return fig - - # huggingface failed to render, so we only visualize pointclouds - # mesh = generate_mesh(pc) - # timestr = time.strftime("%Y%m%d-%H%M%S") - # mesh_path = os.path.join(f'{timestr}.ply') - # with open(mesh_path, 'wb') as f: - # mesh.write_ply(f) - # image_frames = generate_video(mesh_path) - # gif_path = os.path.join(f'{timestr}.gif') - # image_frames[0].save(gif_path, save_all=True, optimizer=False, duration=5, append_images=image_frames[1:], loop=0) - # return f'{timestr}.gif' - - -def compose_clevr_objects(prompt, weights, steps): - weights = [float(x.strip()) for x in weights.split('|')] - weights = th.tensor(weights, device=device).reshape(-1, 1, 1, 1) - coordinates = [ - [ - float(x.split(',')[0].strip()), float(x.split(',')[1].strip())] - for x in prompt.split('|') - ] - coordinates += [[-1, -1]] # add unconditional score label - batch_size = 1 - - clevr_options['timestep_respacing'] = str(int(steps)) - _, clevr_diffusion = create_model_and_diffusion_for_clevr(**clevr_options) - - def model_fn(x_t, ts, **kwargs): - half = x_t[:1] - combined = th.cat([half] * kwargs['y'].size(0), dim=0) - model_out = clevr_model(combined, ts, **kwargs) - eps, rest = model_out[:, :3], model_out[:, 3:] - masks = kwargs.get('masks') - cond_eps = eps[masks] - uncond_eps = eps[~masks] - half_eps = uncond_eps + (weights * (cond_eps - uncond_eps)).sum(dim=0, keepdims=True) - eps = th.cat([half_eps] * x_t.size(0), dim=0) - return th.cat([eps, rest], dim=1) - - def sample(coordinates): - masks = [True] * (len(coordinates) - 1) + [False] - model_kwargs = dict( - y=th.tensor(coordinates, dtype=th.float, device=device), - masks=th.tensor(masks, dtype=th.bool, device=device) - ) - samples = clevr_diffusion.p_sample_loop( - model_fn, - (len(coordinates), 3, clevr_options["image_size"], clevr_options["image_size"]), - device=device, - clip_denoised=True, - progress=True, - model_kwargs=model_kwargs, - cond_fn=None, - )[:batch_size] - - return samples - - samples = sample(coordinates) - out_img = samples[0].permute(1, 2, 0) - out_img = (out_img + 1) / 2 - out_img = (out_img.detach().cpu() * 255.).to(th.uint8) - out_img = out_img.numpy() - - return out_img - - -def stable_diffusion_compose(prompt, steps, weights, seed): - generator = th.Generator("cuda").manual_seed(int(seed)) - image = pipe(prompt, guidance_scale=7.5, num_inference_steps=steps, - weights=weights, generator=generator).images[0] - image.save(f'{"_".join(prompt.split())}.png') - return image - - -def compose_2D_diffusion(prompt, weights, version, steps, seed): - try: - with th.no_grad(): - if version == 'Stable_Diffusion_1v_4': - res = stable_diffusion_compose(prompt, steps, weights, seed) - return res - else: - return compose_clevr_objects(prompt, weights, steps) - except Exception as e: - return None - - -examples_1 = "A castle in a forest | grainy, fog" -examples_3 = '0.1, 0.5 | 0.3, 0.5 | 0.5, 0.5 | 0.7, 0.5 | 0.9, 0.5' -examples_5 = 'a white church | lightning in the background' -examples_6 = 'mystical trees | A dark magical pond | dark' -examples_7 = 'A lake | A mountain | Cherry Blossoms next to the lake' - -image_examples = [ - [examples_6, "7.5 | 7.5 | -7.5", 'Stable_Diffusion_1v_4', 50, 8], - [examples_6, "7.5 | 7.5 | 7.5", 'Stable_Diffusion_1v_4', 50, 8], - [examples_1, "7.5 | -7.5", 'Stable_Diffusion_1v_4', 50, 0], - [examples_7, "7.5 | 7.5 | 7.5", 'Stable_Diffusion_1v_4', 50, 3], - [examples_5, "7.5 | 7.5", 'Stable_Diffusion_1v_4', 50, 0], - [examples_3, "7.5 | 7.5 | 7.5 | 7.5 | 7.5", 'CLEVR Objects', 100, 0] -] - -pointe_examples = [["a cake | a house", "7.5 | 7.5", 'Point-E'], - ["a chair | chair legs", "7.5 | -7.5", 'Point-E'], - ["a green avocado | a chair", "7.5 | 3", 'Point-E'], - ["a toilet | a chair", "7 | 5", 'Point-E']] - -with gr.Blocks() as demo: - gr.Markdown( - """

      Composable Diffusion Models (ECCV - 2022) - Project Page

      """) - gr.Markdown( - """ - - - - - - -
      -
      - -
      "Mystical trees" AND "A magical pond" AND "Dark"
      -
      -
      -
      - -
      "Mystical trees" AND "A magical pond" AND NOT "Dark"
      -
      -
      -
      - -
      "A chair" AND NOT "Chair legs"
      -
      -
      -
      - -
      "A monitor" AND "A brown couch"
      -
      -
      - """ - ) - gr.Markdown( - """

      Compositional visual generation by composing pre-trained diffusion models - using compositional operators, AND and NOT.

      """) - gr.Markdown( - """

      When composing multiple inputs, please use “|” to separate them

      """) - gr.Markdown( - """

      ( Clevr Note: For composing CLEVR objects, we recommend using x in range [0.1, - 0.9] and y in range [0.25, 0.7], since the training dataset labels are in - given ranges.)

      """) - gr.Markdown( - """

      ( Point-E Note: This demo only shows the point cloud results instead of meshes due to - hardware limitation. For mesh results, check out our code to render them on your local machine!)

      """) - gr.Markdown( - """

      ( Stable Diffusion Note: Stable Diffusion has a filter enabled, so it sometimes generates all black - results for possibly inappropriate images.)

      """) - gr.Markdown( - """

      ( Note: Absolute values of weights should be > 1, negative weights indicate negation.)

      """ - ) - with gr.Row(): - with gr.Column(): - gr.Markdown( - """

      Composing natural language descriptions / objects for 2D image - generation

      """) - with gr.Row(): - text_input = gr.Textbox(value="mystical trees | A dark magical pond | dark", label="Text to image prompt") - weights_input = gr.Textbox(value="7.5 | 7.5 | 7.5", label="Weights") - with gr.Row(): - seed_input = gr.Number(0, label="Seed") - steps_input = gr.Slider(10, 200, value=50, label="Steps") - with gr.Row(): - model_input = gr.Radio( - ['Stable_Diffusion_1v_4', 'CLEVR Objects'], type="value", label='Text to image model', - value='Stable_Diffusion_1v_4') - image_output = gr.Image() - image_button = gr.Button("Generate") - img_examples = gr.Examples( - examples=image_examples, - inputs=[text_input, weights_input, model_input, steps_input, seed_input] - ) - - with gr.Column(): - gr.Markdown( - """

      Composing natural language descriptions for 3D asset generation

      """) - with gr.Row(): - asset_input = gr.Textbox(value="a cake | a house", label="Text to 3D prompt") - with gr.Row(): - asset_weights = gr.Textbox(value="7.5 | 7.5", label="Weights") - with gr.Row(): - asset_model = gr.Radio(['Point-E'], type="value", label='Text to 3D model', value='Point-E') - # asset_output = gr.Image(label='GIF') - asset_output = gr.Plot(label='Plot') - asset_button = gr.Button("Generate") - asset_examples = gr.Examples(examples=pointe_examples, inputs=[asset_input, asset_weights, asset_model]) - - image_button.click(compose_2D_diffusion, - inputs=[text_input, weights_input, model_input, steps_input, seed_input], - outputs=image_output) - asset_button.click(compose_pointe, inputs=[asset_input, asset_weights, asset_model], outputs=asset_output) - -if __name__ == "__main__": - demo.queue(max_size=5) - demo.launch(debug=True) diff --git a/spaces/Sidaddy/Beluga2ScriptGenerator/app.py b/spaces/Sidaddy/Beluga2ScriptGenerator/app.py deleted file mode 100644 index 2e8c01a0ef4b9e27d75cd6ea5cf6fde28aa24fb4..0000000000000000000000000000000000000000 --- a/spaces/Sidaddy/Beluga2ScriptGenerator/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.load("models/stabilityai/StableBeluga2").launch() \ No newline at end of file diff --git a/spaces/SmileyTatsu/Bleh/Dockerfile b/spaces/SmileyTatsu/Bleh/Dockerfile deleted file mode 100644 index e5d2c0c6d6f99d68d4f095a2c1fba349cc59efaa..0000000000000000000000000000000000000000 --- a/spaces/SmileyTatsu/Bleh/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/Drago/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Sonnt/Fracture_Webapp/backup/081222/Antuns/page_setting.py b/spaces/Sonnt/Fracture_Webapp/backup/081222/Antuns/page_setting.py deleted file mode 100644 index 42d234f4163a3c39142b4f0939e196184a468893..0000000000000000000000000000000000000000 --- a/spaces/Sonnt/Fracture_Webapp/backup/081222/Antuns/page_setting.py +++ /dev/null @@ -1,14 +0,0 @@ -import streamlit as st -from PIL import Image -img = Image.open("data/LogoVPI.png") -def page_intro(): - st.set_page_config(# Alternate names: setup_page, page, layout - layout="wide", # Can be "centered" or "wide". In the future also "dashboard", etc. - initial_sidebar_state="auto", # Can be "auto", "expanded", "collapsed" - page_title="VPI-MLogs", # String or None. Strings get appended with "• Streamlit". - page_icon=img, # String, anything supported by st.image, or None. - ) - col_1, col_2, col_3, col_4, col_5, = st.columns(5) - with col_3: - st.image("https://i.ibb.co/Yd42K98/LogoVPI.png", width=250) - st.header("Welcome to VPI-MLOGS!") \ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/events.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/events.py deleted file mode 100644 index 3a66e75e5a497de424df54963618bc2b3c711d77..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/events.py +++ /dev/null @@ -1,166 +0,0 @@ -"""Infrastructure for registering and firing callbacks on application events. - -Unlike :mod:`IPython.core.hooks`, which lets end users set single functions to -be called at specific times, or a collection of alternative methods to try, -callbacks are designed to be used by extension authors. A number of callbacks -can be registered for the same event without needing to be aware of one another. - -The functions defined in this module are no-ops indicating the names of available -events and the arguments which will be passed to them. - -.. note:: - - This API is experimental in IPython 2.0, and may be revised in future versions. -""" - -from backcall import callback_prototype - - -class EventManager(object): - """Manage a collection of events and a sequence of callbacks for each. - - This is attached to :class:`~IPython.core.interactiveshell.InteractiveShell` - instances as an ``events`` attribute. - - .. note:: - - This API is experimental in IPython 2.0, and may be revised in future versions. - """ - - def __init__(self, shell, available_events, print_on_error=True): - """Initialise the :class:`CallbackManager`. - - Parameters - ---------- - shell - The :class:`~IPython.core.interactiveshell.InteractiveShell` instance - available_events - An iterable of names for callback events. - print_on_error: - A boolean flag to set whether the EventManager will print a warning which a event errors. - """ - self.shell = shell - self.callbacks = {n:[] for n in available_events} - self.print_on_error = print_on_error - - def register(self, event, function): - """Register a new event callback. - - Parameters - ---------- - event : str - The event for which to register this callback. - function : callable - A function to be called on the given event. It should take the same - parameters as the appropriate callback prototype. - - Raises - ------ - TypeError - If ``function`` is not callable. - KeyError - If ``event`` is not one of the known events. - """ - if not callable(function): - raise TypeError('Need a callable, got %r' % function) - callback_proto = available_events.get(event) - if function not in self.callbacks[event]: - self.callbacks[event].append(callback_proto.adapt(function)) - - def unregister(self, event, function): - """Remove a callback from the given event.""" - if function in self.callbacks[event]: - return self.callbacks[event].remove(function) - - # Remove callback in case ``function`` was adapted by `backcall`. - for callback in self.callbacks[event]: - try: - if callback.__wrapped__ is function: - return self.callbacks[event].remove(callback) - except AttributeError: - pass - - raise ValueError('Function {!r} is not registered as a {} callback'.format(function, event)) - - def trigger(self, event, *args, **kwargs): - """Call callbacks for ``event``. - - Any additional arguments are passed to all callbacks registered for this - event. Exceptions raised by callbacks are caught, and a message printed. - """ - for func in self.callbacks[event][:]: - try: - func(*args, **kwargs) - except (Exception, KeyboardInterrupt): - if self.print_on_error: - print("Error in callback {} (for {}):".format(func, event)) - self.shell.showtraceback() - -# event_name -> prototype mapping -available_events = {} - -def _define_event(callback_function): - callback_proto = callback_prototype(callback_function) - available_events[callback_function.__name__] = callback_proto - return callback_proto - -# ------------------------------------------------------------------------------ -# Callback prototypes -# -# No-op functions which describe the names of available events and the -# signatures of callbacks for those events. -# ------------------------------------------------------------------------------ - -@_define_event -def pre_execute(): - """Fires before code is executed in response to user/frontend action. - - This includes comm and widget messages and silent execution, as well as user - code cells. - """ - pass - -@_define_event -def pre_run_cell(info): - """Fires before user-entered code runs. - - Parameters - ---------- - info : :class:`~IPython.core.interactiveshell.ExecutionInfo` - An object containing information used for the code execution. - """ - pass - -@_define_event -def post_execute(): - """Fires after code is executed in response to user/frontend action. - - This includes comm and widget messages and silent execution, as well as user - code cells. - """ - pass - -@_define_event -def post_run_cell(result): - """Fires after user-entered code runs. - - Parameters - ---------- - result : :class:`~IPython.core.interactiveshell.ExecutionResult` - The object which will be returned as the execution result. - """ - pass - -@_define_event -def shell_initialized(ip): - """Fires after initialisation of :class:`~IPython.core.interactiveshell.InteractiveShell`. - - This is before extensions and startup scripts are loaded, so it can only be - set by subclassing. - - Parameters - ---------- - ip : :class:`~IPython.core.interactiveshell.InteractiveShell` - The newly initialised shell. - """ - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/legacy/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/legacy/__init__.py deleted file mode 100644 index 61cb9c485c1fb769ce6b4fff5e3caaedc63a9d29..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/documents/legacy/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from docarray.documents.legacy.legacy_document import LegacyDocument - -__all__ = ['LegacyDocument'] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/point_cloud_url.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/point_cloud_url.py deleted file mode 100644 index efe6ce6ae0e0623010c5b522e2018b506fcc0d01..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/url/url_3d/point_cloud_url.py +++ /dev/null @@ -1,120 +0,0 @@ -from typing import TYPE_CHECKING, Any, Dict, Optional, TypeVar - -import numpy as np -from pydantic import parse_obj_as - -from docarray.typing.proto_register import _register_proto -from docarray.typing.tensor.ndarray import NdArray -from docarray.typing.url.url_3d.url_3d import Url3D - -if TYPE_CHECKING: - from docarray.documents.point_cloud.points_and_colors import PointsAndColors - - -T = TypeVar('T', bound='PointCloud3DUrl') - - -@_register_proto(proto_type_name='point_cloud_url') -class PointCloud3DUrl(Url3D): - """ - URL to a file containing point cloud information. - Can be remote (web) URL, or a local file path. - """ - - def load( - self: T, - samples: int, - multiple_geometries: bool = False, - skip_materials: bool = True, - trimesh_args: Optional[Dict[str, Any]] = None, - ) -> 'PointsAndColors': - """ - Load the data from the url into an `NdArray` containing point cloud information. - - - --- - - ```python - import numpy as np - from docarray import BaseDoc - - from docarray.typing import PointCloud3DUrl - - - class MyDoc(BaseDoc): - point_cloud_url: PointCloud3DUrl - - - doc = MyDoc(point_cloud_url="thttps://people.sc.fsu.edu/~jburkardt/data/obj/al.obj") - - # point_cloud = doc.point_cloud_url.load(samples=100) - - # assert isinstance(point_cloud, np.ndarray) - # assert point_cloud.shape == (100, 3) - ``` - - --- - - :param samples: number of points to sample from the mesh - :param multiple_geometries: if False, store point cloud in 2D np.ndarray. - If True, store point clouds from multiple geometries in 3D np.ndarray. - :param skip_materials: Skip materials if True, else load. - :param trimesh_args: dictionary of additional arguments for `trimesh.load()` - or `trimesh.load_remote()`. - - :return: np.ndarray representing the point cloud - """ - from docarray.documents.point_cloud.points_and_colors import PointsAndColors - - if not trimesh_args: - trimesh_args = {} - - if multiple_geometries: - # try to coerce everything into a scene - scene = self._load_trimesh_instance( - force='scene', skip_materials=skip_materials, **trimesh_args - ) - point_cloud = np.stack( - [np.array(geo.sample(samples)) for geo in scene.geometry.values()], - axis=0, - ) - else: - # combine a scene into a single mesh - mesh = self._load_trimesh_instance(force='mesh', **trimesh_args) - point_cloud = np.array(mesh.sample(samples)) - - points = parse_obj_as(NdArray, point_cloud) - return PointsAndColors(points=points, colors=None) - - def display( - self, - samples: int = 10000, - ) -> None: - """ - Plot point cloud from url. - - First, it loads the point cloud into a `PointsAndColors` object, and then - calls display on it. The following is therefore equivalent: - - --- - - ```python - import numpy as np - from docarray import BaseDoc - - from docarray.documents import PointCloud3D - - pc = PointCloud3D(url="https://people.sc.fsu.edu/~jburkardt/data/obj/al.obj") - - # option 1 - # pc.url.display() - - # option 2 (equivalent) - # pc.url.load(samples=10000).display() - ``` - - --- - - :param samples: number of points to sample from the mesh. - """ - self.load(samples=samples, skip_materials=False).display() diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/__init__.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py deleted file mode 100644 index 652a34a9aef2d4004f46ad7814befe6d1c230bc4..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/transforms/augmentation_impl.py +++ /dev/null @@ -1,614 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Implement many useful :class:`Augmentation`. -""" -import numpy as np -import sys -from typing import Tuple -import torch -from fvcore.transforms.transform import ( - BlendTransform, - CropTransform, - HFlipTransform, - NoOpTransform, - PadTransform, - Transform, - TransformList, - VFlipTransform, -) -from PIL import Image - -from .augmentation import Augmentation, _transform_to_aug -from .transform import ExtentTransform, ResizeTransform, RotationTransform - -__all__ = [ - "FixedSizeCrop", - "RandomApply", - "RandomBrightness", - "RandomContrast", - "RandomCrop", - "RandomExtent", - "RandomFlip", - "RandomSaturation", - "RandomLighting", - "RandomRotation", - "Resize", - "ResizeScale", - "ResizeShortestEdge", - "RandomCrop_CategoryAreaConstraint", -] - - -class RandomApply(Augmentation): - """ - Randomly apply an augmentation with a given probability. - """ - - def __init__(self, tfm_or_aug, prob=0.5): - """ - Args: - tfm_or_aug (Transform, Augmentation): the transform or augmentation - to be applied. It can either be a `Transform` or `Augmentation` - instance. - prob (float): probability between 0.0 and 1.0 that - the wrapper transformation is applied - """ - super().__init__() - self.aug = _transform_to_aug(tfm_or_aug) - assert 0.0 <= prob <= 1.0, f"Probablity must be between 0.0 and 1.0 (given: {prob})" - self.prob = prob - - def get_transform(self, *args): - do = self._rand_range() < self.prob - if do: - return self.aug.get_transform(*args) - else: - return NoOpTransform() - - def __call__(self, aug_input): - do = self._rand_range() < self.prob - if do: - return self.aug(aug_input) - else: - return NoOpTransform() - - -class RandomFlip(Augmentation): - """ - Flip the image horizontally or vertically with the given probability. - """ - - def __init__(self, prob=0.5, *, horizontal=True, vertical=False): - """ - Args: - prob (float): probability of flip. - horizontal (boolean): whether to apply horizontal flipping - vertical (boolean): whether to apply vertical flipping - """ - super().__init__() - - if horizontal and vertical: - raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.") - if not horizontal and not vertical: - raise ValueError("At least one of horiz or vert has to be True!") - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - do = self._rand_range() < self.prob - if do: - if self.horizontal: - return HFlipTransform(w) - elif self.vertical: - return VFlipTransform(h) - else: - return NoOpTransform() - - -class Resize(Augmentation): - """Resize image to a fixed target size""" - - def __init__(self, shape, interp=Image.BILINEAR): - """ - Args: - shape: (h, w) tuple or a int - interp: PIL interpolation method - """ - if isinstance(shape, int): - shape = (shape, shape) - shape = tuple(shape) - self._init(locals()) - - def get_transform(self, image): - return ResizeTransform( - image.shape[0], image.shape[1], self.shape[0], self.shape[1], self.interp - ) - - -class ResizeShortestEdge(Augmentation): - """ - Resize the image while keeping the aspect ratio unchanged. - It attempts to scale the shorter edge to the given `short_edge_length`, - as long as the longer edge does not exceed `max_size`. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. - """ - - @torch.jit.unused - def __init__( - self, short_edge_length, max_size=sys.maxsize, sample_style="range", interp=Image.BILINEAR - ): - """ - Args: - short_edge_length (list[int]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the shortest edge length. - If ``sample_style=="choice"``, a list of shortest edge lengths to sample from. - max_size (int): maximum allowed longest edge length. - sample_style (str): either "range" or "choice". - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - - self.is_range = sample_style == "range" - if isinstance(short_edge_length, int): - short_edge_length = (short_edge_length, short_edge_length) - if self.is_range: - assert len(short_edge_length) == 2, ( - "short_edge_length must be two values using 'range' sample style." - f" Got {short_edge_length}!" - ) - self._init(locals()) - - @torch.jit.unused - def get_transform(self, image): - h, w = image.shape[:2] - if self.is_range: - size = np.random.randint(self.short_edge_length[0], self.short_edge_length[1] + 1) - else: - size = np.random.choice(self.short_edge_length) - if size == 0: - return NoOpTransform() - - newh, neww = ResizeShortestEdge.get_output_shape(h, w, size, self.max_size) - return ResizeTransform(h, w, newh, neww, self.interp) - - @staticmethod - def get_output_shape( - oldh: int, oldw: int, short_edge_length: int, max_size: int - ) -> Tuple[int, int]: - """ - Compute the output size given input size and target short edge length. - """ - h, w = oldh, oldw - size = short_edge_length * 1.0 - scale = size / min(h, w) - if h < w: - newh, neww = size, scale * w - else: - newh, neww = scale * h, size - if max(newh, neww) > max_size: - scale = max_size * 1.0 / max(newh, neww) - newh = newh * scale - neww = neww * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) - - -class ResizeScale(Augmentation): - """ - Takes target size as input and randomly scales the given target size between `min_scale` - and `max_scale`. It then scales the input image such that it fits inside the scaled target - box, keeping the aspect ratio constant. - This implements the resize part of the Google's 'resize_and_crop' data augmentation: - https://github.com/tensorflow/tpu/blob/master/models/official/detection/utils/input_utils.py#L127 - """ - - def __init__( - self, - min_scale: float, - max_scale: float, - target_height: int, - target_width: int, - interp: int = Image.BILINEAR, - ): - """ - Args: - min_scale: minimum image scale range. - max_scale: maximum image scale range. - target_height: target image height. - target_width: target image width. - interp: image interpolation method. - """ - super().__init__() - self._init(locals()) - - def _get_resize(self, image: np.ndarray, scale: float) -> Transform: - input_size = image.shape[:2] - - # Compute new target size given a scale. - target_size = (self.target_height, self.target_width) - target_scale_size = np.multiply(target_size, scale) - - # Compute actual rescaling applied to input image and output size. - output_scale = np.minimum( - target_scale_size[0] / input_size[0], target_scale_size[1] / input_size[1] - ) - output_size = np.round(np.multiply(input_size, output_scale)).astype(int) - - return ResizeTransform( - input_size[0], input_size[1], output_size[0], output_size[1], self.interp - ) - - def get_transform(self, image: np.ndarray) -> Transform: - random_scale = np.random.uniform(self.min_scale, self.max_scale) - return self._get_resize(image, random_scale) - - -class RandomRotation(Augmentation): - """ - This method returns a copy of this image, rotated the given - number of degrees counter clockwise around the given center. - """ - - def __init__(self, angle, expand=True, center=None, sample_style="range", interp=None): - """ - Args: - angle (list[float]): If ``sample_style=="range"``, - a [min, max] interval from which to sample the angle (in degrees). - If ``sample_style=="choice"``, a list of angles to sample from - expand (bool): choose if the image should be resized to fit the whole - rotated image (default), or simply cropped - center (list[[float, float]]): If ``sample_style=="range"``, - a [[minx, miny], [maxx, maxy]] relative interval from which to sample the center, - [0, 0] being the top left of the image and [1, 1] the bottom right. - If ``sample_style=="choice"``, a list of centers to sample from - Default: None, which means that the center of rotation is the center of the image - center has no effect if expand=True because it only affects shifting - """ - super().__init__() - assert sample_style in ["range", "choice"], sample_style - self.is_range = sample_style == "range" - if isinstance(angle, (float, int)): - angle = (angle, angle) - if center is not None and isinstance(center[0], (float, int)): - center = (center, center) - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - center = None - if self.is_range: - angle = np.random.uniform(self.angle[0], self.angle[1]) - if self.center is not None: - center = ( - np.random.uniform(self.center[0][0], self.center[1][0]), - np.random.uniform(self.center[0][1], self.center[1][1]), - ) - else: - angle = np.random.choice(self.angle) - if self.center is not None: - center = np.random.choice(self.center) - - if center is not None: - center = (w * center[0], h * center[1]) # Convert to absolute coordinates - - if angle % 360 == 0: - return NoOpTransform() - - return RotationTransform(h, w, angle, expand=self.expand, center=center, interp=self.interp) - - -class FixedSizeCrop(Augmentation): - """ - If `crop_size` is smaller than the input image size, then it uses a random crop of - the crop size. If `crop_size` is larger than the input image size, then it pads - the right and the bottom of the image to the crop size if `pad` is True, otherwise - it returns the smaller image. - """ - - def __init__(self, crop_size: Tuple[int], pad: bool = True, pad_value: float = 128.0): - """ - Args: - crop_size: target image (height, width). - pad: if True, will pad images smaller than `crop_size` up to `crop_size` - pad_value: the padding value. - """ - super().__init__() - self._init(locals()) - - def _get_crop(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add random crop if the image is scaled up. - max_offset = np.subtract(input_size, output_size) - max_offset = np.maximum(max_offset, 0) - offset = np.multiply(max_offset, np.random.uniform(0.0, 1.0)) - offset = np.round(offset).astype(int) - return CropTransform( - offset[1], offset[0], output_size[1], output_size[0], input_size[1], input_size[0] - ) - - def _get_pad(self, image: np.ndarray) -> Transform: - # Compute the image scale and scaled size. - input_size = image.shape[:2] - output_size = self.crop_size - - # Add padding if the image is scaled down. - pad_size = np.subtract(output_size, input_size) - pad_size = np.maximum(pad_size, 0) - original_size = np.minimum(input_size, output_size) - return PadTransform( - 0, 0, pad_size[1], pad_size[0], original_size[1], original_size[0], self.pad_value - ) - - def get_transform(self, image: np.ndarray) -> TransformList: - transforms = [self._get_crop(image)] - if self.pad: - transforms.append(self._get_pad(image)) - return TransformList(transforms) - - -class RandomCrop(Augmentation): - """ - Randomly crop a rectangle region out of an image. - """ - - def __init__(self, crop_type: str, crop_size): - """ - Args: - crop_type (str): one of "relative_range", "relative", "absolute", "absolute_range". - crop_size (tuple[float, float]): two floats, explained below. - - - "relative": crop a (H * crop_size[0], W * crop_size[1]) region from an input image of - size (H, W). crop size should be in (0, 1] - - "relative_range": uniformly sample two values from [crop_size[0], 1] - and [crop_size[1]], 1], and use them as in "relative" crop type. - - "absolute" crop a (crop_size[0], crop_size[1]) region from input image. - crop_size must be smaller than the input image size. - - "absolute_range", for an input of size (H, W), uniformly sample H_crop in - [crop_size[0], min(H, crop_size[1])] and W_crop in [crop_size[0], min(W, crop_size[1])]. - Then crop a region (H_crop, W_crop). - """ - # TODO style of relative_range and absolute_range are not consistent: - # one takes (h, w) but another takes (min, max) - super().__init__() - assert crop_type in ["relative_range", "relative", "absolute", "absolute_range"] - self._init(locals()) - - def get_transform(self, image): - h, w = image.shape[:2] - croph, cropw = self.get_crop_size((h, w)) - assert h >= croph and w >= cropw, "Shape computation in {} has bugs.".format(self) - h0 = np.random.randint(h - croph + 1) - w0 = np.random.randint(w - cropw + 1) - return CropTransform(w0, h0, cropw, croph) - - def get_crop_size(self, image_size): - """ - Args: - image_size (tuple): height, width - - Returns: - crop_size (tuple): height, width in absolute pixels - """ - h, w = image_size - if self.crop_type == "relative": - ch, cw = self.crop_size - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "relative_range": - crop_size = np.asarray(self.crop_size, dtype=np.float32) - ch, cw = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * ch + 0.5), int(w * cw + 0.5) - elif self.crop_type == "absolute": - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == "absolute_range": - assert self.crop_size[0] <= self.crop_size[1] - ch = np.random.randint(min(h, self.crop_size[0]), min(h, self.crop_size[1]) + 1) - cw = np.random.randint(min(w, self.crop_size[0]), min(w, self.crop_size[1]) + 1) - return ch, cw - else: - raise NotImplementedError("Unknown crop type {}".format(self.crop_type)) - - -class RandomCrop_CategoryAreaConstraint(Augmentation): - """ - Similar to :class:`RandomCrop`, but find a cropping window such that no single category - occupies a ratio of more than `single_category_max_area` in semantic segmentation ground - truth, which can cause unstability in training. The function attempts to find such a valid - cropping window for at most 10 times. - """ - - def __init__( - self, - crop_type: str, - crop_size, - single_category_max_area: float = 1.0, - ignored_category: int = None, - ): - """ - Args: - crop_type, crop_size: same as in :class:`RandomCrop` - single_category_max_area: the maximum allowed area ratio of a - category. Set to 1.0 to disable - ignored_category: allow this category in the semantic segmentation - ground truth to exceed the area ratio. Usually set to the category - that's ignored in training. - """ - self.crop_aug = RandomCrop(crop_type, crop_size) - self._init(locals()) - - def get_transform(self, image, sem_seg): - if self.single_category_max_area >= 1.0: - return self.crop_aug.get_transform(image) - else: - h, w = sem_seg.shape - for _ in range(10): - crop_size = self.crop_aug.get_crop_size((h, w)) - y0 = np.random.randint(h - crop_size[0] + 1) - x0 = np.random.randint(w - crop_size[1] + 1) - sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]] - labels, cnt = np.unique(sem_seg_temp, return_counts=True) - if self.ignored_category is not None: - cnt = cnt[labels != self.ignored_category] - if len(cnt) > 1 and np.max(cnt) < np.sum(cnt) * self.single_category_max_area: - break - crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0]) - return crop_tfm - - -class RandomExtent(Augmentation): - """ - Outputs an image by cropping a random "subrect" of the source image. - - The subrect can be parameterized to include pixels outside the source image, - in which case they will be set to zeros (i.e. black). The size of the output - image will vary with the size of the random subrect. - """ - - def __init__(self, scale_range, shift_range): - """ - Args: - output_size (h, w): Dimensions of output image - scale_range (l, h): Range of input-to-output size scaling factor - shift_range (x, y): Range of shifts of the cropped subrect. The rect - is shifted by [w / 2 * Uniform(-x, x), h / 2 * Uniform(-y, y)], - where (w, h) is the (width, height) of the input image. Set each - component to zero to crop at the image's center. - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - img_h, img_w = image.shape[:2] - - # Initialize src_rect to fit the input image. - src_rect = np.array([-0.5 * img_w, -0.5 * img_h, 0.5 * img_w, 0.5 * img_h]) - - # Apply a random scaling to the src_rect. - src_rect *= np.random.uniform(self.scale_range[0], self.scale_range[1]) - - # Apply a random shift to the coordinates origin. - src_rect[0::2] += self.shift_range[0] * img_w * (np.random.rand() - 0.5) - src_rect[1::2] += self.shift_range[1] * img_h * (np.random.rand() - 0.5) - - # Map src_rect coordinates into image coordinates (center at corner). - src_rect[0::2] += 0.5 * img_w - src_rect[1::2] += 0.5 * img_h - - return ExtentTransform( - src_rect=(src_rect[0], src_rect[1], src_rect[2], src_rect[3]), - output_size=(int(src_rect[3] - src_rect[1]), int(src_rect[2] - src_rect[0])), - ) - - -class RandomContrast(Augmentation): - """ - Randomly transforms image contrast. - - Contrast intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce contrast - - intensity = 1 will preserve the input image - - intensity > 1 will increase contrast - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=image.mean(), src_weight=1 - w, dst_weight=w) - - -class RandomBrightness(Augmentation): - """ - Randomly transforms image brightness. - - Brightness intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce brightness - - intensity = 1 will preserve the input image - - intensity > 1 will increase brightness - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation - intensity_max (float): Maximum augmentation - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - w = np.random.uniform(self.intensity_min, self.intensity_max) - return BlendTransform(src_image=0, src_weight=1 - w, dst_weight=w) - - -class RandomSaturation(Augmentation): - """ - Randomly transforms saturation of an RGB image. - Input images are assumed to have 'RGB' channel order. - - Saturation intensity is uniformly sampled in (intensity_min, intensity_max). - - intensity < 1 will reduce saturation (make the image more grayscale) - - intensity = 1 will preserve the input image - - intensity > 1 will increase saturation - - See: https://pillow.readthedocs.io/en/3.0.x/reference/ImageEnhance.html - """ - - def __init__(self, intensity_min, intensity_max): - """ - Args: - intensity_min (float): Minimum augmentation (1 preserves input). - intensity_max (float): Maximum augmentation (1 preserves input). - """ - super().__init__() - self._init(locals()) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomSaturation only works on RGB images" - w = np.random.uniform(self.intensity_min, self.intensity_max) - grayscale = image.dot([0.299, 0.587, 0.114])[:, :, np.newaxis] - return BlendTransform(src_image=grayscale, src_weight=1 - w, dst_weight=w) - - -class RandomLighting(Augmentation): - """ - The "lighting" augmentation described in AlexNet, using fixed PCA over ImageNet. - Input images are assumed to have 'RGB' channel order. - - The degree of color jittering is randomly sampled via a normal distribution, - with standard deviation given by the scale parameter. - """ - - def __init__(self, scale): - """ - Args: - scale (float): Standard deviation of principal component weighting. - """ - super().__init__() - self._init(locals()) - self.eigen_vecs = np.array( - [[-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203]] - ) - self.eigen_vals = np.array([0.2175, 0.0188, 0.0045]) - - def get_transform(self, image): - assert image.shape[-1] == 3, "RandomLighting only works on RGB images" - weights = np.random.normal(scale=self.scale, size=3) - return BlendTransform( - src_image=self.eigen_vecs.dot(weights * self.eigen_vals), src_weight=1.0, dst_weight=1.0 - ) diff --git a/spaces/ThirdEyeData/Object_Detection/app.py b/spaces/ThirdEyeData/Object_Detection/app.py deleted file mode 100644 index 8b9fe278abbdbca34de40892b8b6aa63a4f0872b..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Object_Detection/app.py +++ /dev/null @@ -1,134 +0,0 @@ -from detecto import core, utils, visualize -from detecto.visualize import show_labeled_image, plot_prediction_grid -from torchvision import transforms -import matplotlib.pyplot as plt -from tensorflow.keras.utils import img_to_array -import numpy as np -import warnings -from PIL import Image -import streamlit as st -warnings.filterwarnings("ignore", category=UserWarning) -from tempfile import NamedTemporaryFile - -import cv2 -import matplotlib.patches as patches - -import torch - -import matplotlib.image as mpimg -import os - -from detecto.utils import reverse_normalize, normalize_transform, _is_iterable -from torchvision import transforms - - -MODEL_PATH = "SD_model_weights.pth" -IMAGE_PATH = "img1.jpeg" -model = core.Model.load(MODEL_PATH, ['cross_arm','pole','tag']) -#warnings.warn(msg) - -st.title("Object Detection") -image = utils.read_image(IMAGE_PATH) -predictions = model.predict(image) -labels, boxes, scores = predictions - -images = ["img1.jpeg","img4.jpeg","img5.jpeg","img6.jpeg"] -with st.sidebar: - st.write("choose an image") - st.image(images) - - - -def detect_object(IMAGE_PATH): - image = utils.read_image(IMAGE_PATH) - # predictions = model.predict(image) - # labels, boxes, scores = predictions - - - thresh=0.2 - filtered_indices=np.where(scores>thresh) - filtered_scores=scores[filtered_indices] - filtered_boxes=boxes[filtered_indices] - num_list = filtered_indices[0].tolist() - filtered_labels = [labels[i] for i in num_list] - show_labeled_image(image, filtered_boxes, filtered_labels) - - fig1 = show_image(image,filtered_boxes,filtered_labels) - st.write("Object Detected Image is") - st.image(fig1) - #img_array = img_to_array(img) -def show_image(image, boxes, labels=None): - """Show the image along with the specified boxes around detected objects. - Also displays each box's label if a list of labels is provided. - :param image: The image to plot. If the image is a normalized - torch.Tensor object, it will automatically be reverse-normalized - and converted to a PIL image for plotting. - :type image: numpy.ndarray or torch.Tensor - :param boxes: A torch tensor of size (N, 4) where N is the number - of boxes to plot, or simply size 4 if N is 1. - :type boxes: torch.Tensor - :param labels: (Optional) A list of size N giving the labels of - each box (labels[i] corresponds to boxes[i]). Defaults to None. - :type labels: torch.Tensor or None - **Example**:: - >>> from detecto.core import Model - >>> from detecto.utils import read_image - >>> from detecto.visualize import show_labeled_image - >>> model = Model.load('model_weights.pth', ['tick', 'gate']) - >>> image = read_image('image.jpg') - >>> labels, boxes, scores = model.predict(image) - >>> show_labeled_image(image, boxes, labels) - """ - fig, ax = plt.subplots(1) - # If the image is already a tensor, convert it back to a PILImage - # and reverse normalize it - if isinstance(image, torch.Tensor): - image = reverse_normalize(image) - image = transforms.ToPILImage()(image) - ax.imshow(image) - - # Show a single box or multiple if provided - if boxes.ndim == 1: - boxes = boxes.view(1, 4) - - if labels is not None and not _is_iterable(labels): - labels = [labels] - - # Plot each box - for i in range(2): - box = boxes[i] - width, height = (box[2] - box[0]).item(), (box[3] - box[1]).item() - initial_pos = (box[0].item(), box[1].item()) - rect = patches.Rectangle(initial_pos, width, height, linewidth=1, - edgecolor='r', facecolor='none') - if labels: - ax.text(box[0] + 5, box[1] - 5, '{}'.format(labels[i]), color='red') - - ax.add_patch(rect) - - cp = os.path.abspath(os.getcwd()) + '/foo.png' - plt.savefig(cp) - plt.close(fig) - return cp - #print(type(plt - -file = st.file_uploader('Upload an Image',type=(["jpeg","jpg","png"])) - -if file is None: - st.write("Please upload an image file") -else: - image= Image.open(file) - st.write("Input Image") - st.image(image,use_column_width = True) - with NamedTemporaryFile(dir='.', suffix='.jpeg') as f: - f.write(file.getbuffer()) - #your_function_which_takes_a_path(f.name) - - detect_object(f.name) - - - - - - - diff --git a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2_outputs.py b/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2_outputs.py deleted file mode 100644 index e8722b1fedaec1e31e39d8c80f911b8ff79bbb75..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/MiniGPT-v2/minigpt4/models/blip2_outputs.py +++ /dev/null @@ -1,110 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from dataclasses import dataclass -from typing import Optional - -import torch -from transformers.modeling_outputs import ( - ModelOutput, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, -) - - -@dataclass -class BlipSimilarity(ModelOutput): - sim_i2t: torch.FloatTensor = None - sim_t2i: torch.FloatTensor = None - - sim_i2t_m: Optional[torch.FloatTensor] = None - sim_t2i_m: Optional[torch.FloatTensor] = None - - sim_i2t_targets: Optional[torch.FloatTensor] = None - sim_t2i_targets: Optional[torch.FloatTensor] = None - - -@dataclass -class BlipIntermediateOutput(ModelOutput): - """ - Data class for intermediate outputs of BLIP models. - - image_embeds (torch.FloatTensor): Image embeddings, shape (batch_size, num_patches, embed_dim). - text_embeds (torch.FloatTensor): Text embeddings, shape (batch_size, seq_len, embed_dim). - - image_embeds_m (torch.FloatTensor): Image embeddings from momentum visual encoder, shape (batch_size, num_patches, embed_dim). - text_embeds_m (torch.FloatTensor): Text embeddings from momentum text encoder, shape (batch_size, seq_len, embed_dim). - - encoder_output (BaseModelOutputWithPoolingAndCrossAttentions): output from the image-grounded text encoder. - encoder_output_neg (BaseModelOutputWithPoolingAndCrossAttentions): output from the image-grounded text encoder for negative pairs. - - decoder_output (CausalLMOutputWithCrossAttentions): output from the image-grounded text decoder. - decoder_labels (torch.LongTensor): labels for the captioning loss. - - itm_logits (torch.FloatTensor): logits for the image-text matching loss, shape (batch_size * 3, 2). - itm_labels (torch.LongTensor): labels for the image-text matching loss, shape (batch_size * 3,) - - """ - - # uni-modal features - image_embeds: torch.FloatTensor = None - text_embeds: Optional[torch.FloatTensor] = None - - image_embeds_m: Optional[torch.FloatTensor] = None - text_embeds_m: Optional[torch.FloatTensor] = None - - # intermediate outputs of multimodal encoder - encoder_output: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None - encoder_output_neg: Optional[BaseModelOutputWithPoolingAndCrossAttentions] = None - - itm_logits: Optional[torch.FloatTensor] = None - itm_labels: Optional[torch.LongTensor] = None - - # intermediate outputs of multimodal decoder - decoder_output: Optional[CausalLMOutputWithCrossAttentions] = None - decoder_labels: Optional[torch.LongTensor] = None - - -@dataclass -class BlipOutput(ModelOutput): - # some finetuned models (e.g. BlipVQA) do not compute similarity, thus optional. - sims: Optional[BlipSimilarity] = None - - intermediate_output: BlipIntermediateOutput = None - - loss: Optional[torch.FloatTensor] = None - - loss_itc: Optional[torch.FloatTensor] = None - - loss_itm: Optional[torch.FloatTensor] = None - - loss_lm: Optional[torch.FloatTensor] = None - - -@dataclass -class BlipOutputFeatures(ModelOutput): - """ - Data class of features from BlipFeatureExtractor. - - Args: - image_embeds: (torch.FloatTensor) of shape (batch_size, num_patches+1, embed_dim), optional - image_features: (torch.FloatTensor) of shape (batch_size, num_patches+1, feature_dim), optional - text_embeds: (torch.FloatTensor) of shape (batch_size, sequence_length+1, embed_dim), optional - text_features: (torch.FloatTensor) of shape (batch_size, sequence_length+1, feature_dim), optional - - The first embedding or feature is for the [CLS] token. - - Features are obtained by projecting the corresponding embedding into a normalized low-dimensional space. - """ - - image_embeds: Optional[torch.FloatTensor] = None - image_embeds_proj: Optional[torch.FloatTensor] = None - - text_embeds: Optional[torch.FloatTensor] = None - text_embeds_proj: Optional[torch.FloatTensor] = None - - multimodal_embeds: Optional[torch.FloatTensor] = None diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/runners/runner_base.py b/spaces/Vision-CAIR/minigpt4/minigpt4/runners/runner_base.py deleted file mode 100644 index 5f667f213d3874e3b616080df22de9ff91a9844b..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/minigpt4/minigpt4/runners/runner_base.py +++ /dev/null @@ -1,658 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import json -import logging -import os -import time -from pathlib import Path - -import torch -import torch.distributed as dist -import webdataset as wds -from minigpt4.common.dist_utils import ( - download_cached_file, - get_rank, - get_world_size, - is_main_process, - main_process, -) -from minigpt4.common.registry import registry -from minigpt4.common.utils import is_url -from minigpt4.datasets.data_utils import concat_datasets, reorg_datasets_by_split, ChainDataset -from minigpt4.datasets.datasets.dataloader_utils import ( - IterLoader, - MultiIterLoader, - PrefetchLoader, -) -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.data import DataLoader, DistributedSampler - - -@registry.register_runner("runner_base") -class RunnerBase: - """ - A runner class to train and evaluate a model given a task and datasets. - - The runner uses pytorch distributed data parallel by default. Future release - will support other distributed frameworks. - """ - - def __init__(self, cfg, task, model, datasets, job_id): - self.config = cfg - self.job_id = job_id - - self.task = task - self.datasets = datasets - - self._model = model - - self._wrapped_model = None - self._device = None - self._optimizer = None - self._scaler = None - self._dataloaders = None - self._lr_sched = None - - self.start_epoch = 0 - - # self.setup_seeds() - self.setup_output_dir() - - @property - def device(self): - if self._device is None: - self._device = torch.device(self.config.run_cfg.device) - - return self._device - - @property - def use_distributed(self): - return self.config.run_cfg.distributed - - @property - def model(self): - """ - A property to get the DDP-wrapped model on the device. - """ - # move model to device - if self._model.device != self.device: - self._model = self._model.to(self.device) - - # distributed training wrapper - if self.use_distributed: - if self._wrapped_model is None: - self._wrapped_model = DDP( - self._model, device_ids=[self.config.run_cfg.gpu] - ) - else: - self._wrapped_model = self._model - - return self._wrapped_model - - @property - def optimizer(self): - # TODO make optimizer class and configurations - if self._optimizer is None: - num_parameters = 0 - p_wd, p_non_wd = [], [] - for n, p in self.model.named_parameters(): - if not p.requires_grad: - continue # frozen weights - print(n) - if p.ndim < 2 or "bias" in n or "ln" in n or "bn" in n: - p_non_wd.append(p) - else: - p_wd.append(p) - num_parameters += p.data.nelement() - logging.info("number of trainable parameters: %d" % num_parameters) - optim_params = [ - { - "params": p_wd, - "weight_decay": float(self.config.run_cfg.weight_decay), - }, - {"params": p_non_wd, "weight_decay": 0}, - ] - beta2 = self.config.run_cfg.get("beta2", 0.999) - self._optimizer = torch.optim.AdamW( - optim_params, - lr=float(self.config.run_cfg.init_lr), - weight_decay=float(self.config.run_cfg.weight_decay), - betas=(0.9, beta2), - ) - - return self._optimizer - - @property - def scaler(self): - amp = self.config.run_cfg.get("amp", False) - - if amp: - if self._scaler is None: - self._scaler = torch.cuda.amp.GradScaler() - - return self._scaler - - @property - def lr_scheduler(self): - """ - A property to get and create learning rate scheduler by split just in need. - """ - if self._lr_sched is None: - lr_sched_cls = registry.get_lr_scheduler_class(self.config.run_cfg.lr_sched) - - # max_epoch = self.config.run_cfg.max_epoch - max_epoch = self.max_epoch - # min_lr = self.config.run_cfg.min_lr - min_lr = self.min_lr - # init_lr = self.config.run_cfg.init_lr - init_lr = self.init_lr - - # optional parameters - decay_rate = self.config.run_cfg.get("lr_decay_rate", None) - warmup_start_lr = self.config.run_cfg.get("warmup_lr", -1) - warmup_steps = self.config.run_cfg.get("warmup_steps", 0) - iters_per_epoch = self.config.run_cfg.get("iters_per_epoch", None) - - if iters_per_epoch is None: - try: - iters_per_epoch = len(self.dataloaders['train']) - except (AttributeError, TypeError): - iters_per_epoch = 10000 - - self._lr_sched = lr_sched_cls( - optimizer=self.optimizer, - max_epoch=max_epoch, - iters_per_epoch=iters_per_epoch, - min_lr=min_lr, - init_lr=init_lr, - decay_rate=decay_rate, - warmup_start_lr=warmup_start_lr, - warmup_steps=warmup_steps, - ) - - return self._lr_sched - - @property - def dataloaders(self) -> dict: - """ - A property to get and create dataloaders by split just in need. - - If no train_dataset_ratio is provided, concatenate map-style datasets and - chain wds.DataPipe datasets separately. Training set becomes a tuple - (ConcatDataset, ChainDataset), both are optional but at least one of them is - required. The resultant ConcatDataset and ChainDataset will be sampled evenly. - - If train_dataset_ratio is provided, create a MultiIterLoader to sample - each dataset by ratios during training. - - Currently do not support multiple datasets for validation and test. - - Returns: - dict: {split_name: (tuples of) dataloader} - """ - if self._dataloaders is None: - - # concatenate map-style datasets and chain wds.DataPipe datasets separately - # training set becomes a tuple (ConcatDataset, ChainDataset), both are - # optional but at least one of them is required. The resultant ConcatDataset - # and ChainDataset will be sampled evenly. - logging.info( - "dataset_ratios not specified, datasets will be concatenated (map-style datasets) or chained (webdataset.DataPipeline)." - ) - - datasets = reorg_datasets_by_split(self.datasets) - self.datasets = datasets - # self.datasets = concat_datasets(datasets) - - # print dataset statistics after concatenation/chaining - for split_name in self.datasets: - if isinstance(self.datasets[split_name], tuple) or isinstance( - self.datasets[split_name], list - ): - # mixed wds.DataPipeline and torch.utils.data.Dataset - num_records = sum( - [ - len(d) - if not type(d) in [wds.DataPipeline, ChainDataset] - else 0 - for d in self.datasets[split_name] - ] - ) - - else: - if hasattr(self.datasets[split_name], "__len__"): - # a single map-style dataset - num_records = len(self.datasets[split_name]) - else: - # a single wds.DataPipeline - num_records = -1 - logging.info( - "Only a single wds.DataPipeline dataset, no __len__ attribute." - ) - - if num_records >= 0: - logging.info( - "Loaded {} records for {} split from the dataset.".format( - num_records, split_name - ) - ) - - # create dataloaders - split_names = sorted(self.datasets.keys()) - - datasets = [self.datasets[split] for split in split_names] - is_trains = [split in self.train_splits for split in split_names] - - batch_sizes = [ - self.config.run_cfg.batch_size_train - if split == "train" - else self.config.run_cfg.batch_size_eval - for split in split_names - ] - - collate_fns = [] - for dataset in datasets: - if isinstance(dataset, tuple) or isinstance(dataset, list): - collate_fns.append([getattr(d, "collater", None) for d in dataset]) - else: - collate_fns.append(getattr(dataset, "collater", None)) - - dataloaders = self.create_loaders( - datasets=datasets, - num_workers=self.config.run_cfg.num_workers, - batch_sizes=batch_sizes, - is_trains=is_trains, - collate_fns=collate_fns, - ) - - self._dataloaders = {k: v for k, v in zip(split_names, dataloaders)} - - return self._dataloaders - - @property - def cuda_enabled(self): - return self.device.type == "cuda" - - @property - def max_epoch(self): - return int(self.config.run_cfg.max_epoch) - - @property - def log_freq(self): - log_freq = self.config.run_cfg.get("log_freq", 50) - return int(log_freq) - - @property - def init_lr(self): - return float(self.config.run_cfg.init_lr) - - @property - def min_lr(self): - return float(self.config.run_cfg.min_lr) - - @property - def accum_grad_iters(self): - return int(self.config.run_cfg.get("accum_grad_iters", 1)) - - @property - def valid_splits(self): - valid_splits = self.config.run_cfg.get("valid_splits", []) - - if len(valid_splits) == 0: - logging.info("No validation splits found.") - - return valid_splits - - @property - def test_splits(self): - test_splits = self.config.run_cfg.get("test_splits", []) - - return test_splits - - @property - def train_splits(self): - train_splits = self.config.run_cfg.get("train_splits", []) - - if len(train_splits) == 0: - logging.info("Empty train splits.") - - return train_splits - - @property - def evaluate_only(self): - """ - Set to True to skip training. - """ - return self.config.run_cfg.evaluate - - @property - def use_dist_eval_sampler(self): - return self.config.run_cfg.get("use_dist_eval_sampler", True) - - @property - def resume_ckpt_path(self): - return self.config.run_cfg.get("resume_ckpt_path", None) - - @property - def train_loader(self): - train_dataloader = self.dataloaders["train"] - - return train_dataloader - - def setup_output_dir(self): - lib_root = Path(registry.get_path("library_root")) - - output_dir = lib_root / self.config.run_cfg.output_dir / self.job_id - result_dir = output_dir / "result" - - output_dir.mkdir(parents=True, exist_ok=True) - result_dir.mkdir(parents=True, exist_ok=True) - - registry.register_path("result_dir", str(result_dir)) - registry.register_path("output_dir", str(output_dir)) - - self.result_dir = result_dir - self.output_dir = output_dir - - def train(self): - start_time = time.time() - best_agg_metric = 0 - best_epoch = 0 - - self.log_config() - - # resume from checkpoint if specified - if not self.evaluate_only and self.resume_ckpt_path is not None: - self._load_checkpoint(self.resume_ckpt_path) - - for cur_epoch in range(self.start_epoch, self.max_epoch): - # training phase - if not self.evaluate_only: - logging.info("Start training") - train_stats = self.train_epoch(cur_epoch) - self.log_stats(split_name="train", stats=train_stats) - - # evaluation phase - if len(self.valid_splits) > 0: - for split_name in self.valid_splits: - logging.info("Evaluating on {}.".format(split_name)) - - val_log = self.eval_epoch( - split_name=split_name, cur_epoch=cur_epoch - ) - if val_log is not None: - if is_main_process(): - assert ( - "agg_metrics" in val_log - ), "No agg_metrics found in validation log." - - agg_metrics = val_log["agg_metrics"] - if agg_metrics > best_agg_metric and split_name == "val": - best_epoch, best_agg_metric = cur_epoch, agg_metrics - - self._save_checkpoint(cur_epoch, is_best=True) - - val_log.update({"best_epoch": best_epoch}) - self.log_stats(val_log, split_name) - - else: - # if no validation split is provided, we just save the checkpoint at the end of each epoch. - if not self.evaluate_only: - self._save_checkpoint(cur_epoch, is_best=False) - - if self.evaluate_only: - break - - if self.config.run_cfg.distributed: - dist.barrier() - - # testing phase - test_epoch = "best" if len(self.valid_splits) > 0 else cur_epoch - self.evaluate(cur_epoch=test_epoch, skip_reload=self.evaluate_only) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - logging.info("Training time {}".format(total_time_str)) - - def evaluate(self, cur_epoch="best", skip_reload=False): - test_logs = dict() - - if len(self.test_splits) > 0: - for split_name in self.test_splits: - test_logs[split_name] = self.eval_epoch( - split_name=split_name, cur_epoch=cur_epoch, skip_reload=skip_reload - ) - - return test_logs - - def train_epoch(self, epoch): - # train - self.model.train() - - return self.task.train_epoch( - epoch=epoch, - model=self.model, - data_loader=self.train_loader, - optimizer=self.optimizer, - scaler=self.scaler, - lr_scheduler=self.lr_scheduler, - cuda_enabled=self.cuda_enabled, - log_freq=self.log_freq, - accum_grad_iters=self.accum_grad_iters, - ) - - @torch.no_grad() - def eval_epoch(self, split_name, cur_epoch, skip_reload=False): - """ - Evaluate the model on a given split. - - Args: - split_name (str): name of the split to evaluate on. - cur_epoch (int): current epoch. - skip_reload_best (bool): whether to skip reloading the best checkpoint. - During training, we will reload the best checkpoint for validation. - During testing, we will use provided weights and skip reloading the best checkpoint . - """ - data_loader = self.dataloaders.get(split_name, None) - assert data_loader, "data_loader for split {} is None.".format(split_name) - - # TODO In validation, you need to compute loss as well as metrics - # TODO consider moving to model.before_evaluation() - model = self.unwrap_dist_model(self.model) - if not skip_reload and cur_epoch == "best": - model = self._reload_best_model(model) - model.eval() - - self.task.before_evaluation( - model=model, - dataset=self.datasets[split_name], - ) - results = self.task.evaluation(model, data_loader) - - if results is not None: - return self.task.after_evaluation( - val_result=results, - split_name=split_name, - epoch=cur_epoch, - ) - - def unwrap_dist_model(self, model): - if self.use_distributed: - return model.module - else: - return model - - def create_loaders( - self, - datasets, - num_workers, - batch_sizes, - is_trains, - collate_fns, - dataset_ratios=None, - ): - """ - Create dataloaders for training and validation. - """ - - def _create_loader(dataset, num_workers, bsz, is_train, collate_fn): - # create a single dataloader for each split - if isinstance(dataset, ChainDataset) or isinstance( - dataset, wds.DataPipeline - ): - # wds.WebdDataset instance are chained together - # webdataset.DataPipeline has its own sampler and collate_fn - loader = iter( - DataLoader( - dataset, - batch_size=bsz, - num_workers=num_workers, - pin_memory=True, - ) - ) - else: - # map-style dataset are concatenated together - # setup distributed sampler - if self.use_distributed: - sampler = DistributedSampler( - dataset, - shuffle=is_train, - num_replicas=get_world_size(), - rank=get_rank(), - ) - if not self.use_dist_eval_sampler: - # e.g. retrieval evaluation - sampler = sampler if is_train else None - else: - sampler = None - - loader = DataLoader( - dataset, - batch_size=bsz, - num_workers=num_workers, - pin_memory=True, - sampler=sampler, - shuffle=sampler is None and is_train, - collate_fn=collate_fn, - drop_last=True if is_train else False, - ) - loader = PrefetchLoader(loader) - - if is_train: - loader = IterLoader(loader, use_distributed=self.use_distributed) - - return loader - - loaders = [] - - for dataset, bsz, is_train, collate_fn in zip( - datasets, batch_sizes, is_trains, collate_fns - ): - if isinstance(dataset, list) or isinstance(dataset, tuple): - if hasattr(dataset[0], 'sample_ratio') and dataset_ratios is None: - dataset_ratios = [d.sample_ratio for d in dataset] - loader = MultiIterLoader( - loaders=[ - _create_loader(d, num_workers, bsz, is_train, collate_fn[i]) - for i, d in enumerate(dataset) - ], - ratios=dataset_ratios, - ) - else: - loader = _create_loader(dataset, num_workers, bsz, is_train, collate_fn) - - loaders.append(loader) - - return loaders - - @main_process - def _save_checkpoint(self, cur_epoch, is_best=False): - """ - Save the checkpoint at the current epoch. - """ - model_no_ddp = self.unwrap_dist_model(self.model) - param_grad_dic = { - k: v.requires_grad for (k, v) in model_no_ddp.named_parameters() - } - state_dict = model_no_ddp.state_dict() - for k in list(state_dict.keys()): - if k in param_grad_dic.keys() and not param_grad_dic[k]: - # delete parameters that do not require gradient - del state_dict[k] - save_obj = { - "model": state_dict, - "optimizer": self.optimizer.state_dict(), - "config": self.config.to_dict(), - "scaler": self.scaler.state_dict() if self.scaler else None, - "epoch": cur_epoch, - } - save_to = os.path.join( - self.output_dir, - "checkpoint_{}.pth".format("best" if is_best else cur_epoch), - ) - logging.info("Saving checkpoint at epoch {} to {}.".format(cur_epoch, save_to)) - torch.save(save_obj, save_to) - - def _reload_best_model(self, model): - """ - Load the best checkpoint for evaluation. - """ - checkpoint_path = os.path.join(self.output_dir, "checkpoint_best.pth") - - logging.info("Loading checkpoint from {}.".format(checkpoint_path)) - checkpoint = torch.load(checkpoint_path, map_location="cpu") - try: - model.load_state_dict(checkpoint["model"]) - except RuntimeError as e: - logging.warning( - """ - Key mismatch when loading checkpoint. This is expected if only part of the model is saved. - Trying to load the model with strict=False. - """ - ) - model.load_state_dict(checkpoint["model"], strict=False) - return model - - def _load_checkpoint(self, url_or_filename): - """ - Resume from a checkpoint. - """ - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location=self.device, strict=False) - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location=self.device, strict=False) - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - self.unwrap_dist_model(self.model).load_state_dict(state_dict) - - self.optimizer.load_state_dict(checkpoint["optimizer"]) - if self.scaler and "scaler" in checkpoint: - self.scaler.load_state_dict(checkpoint["scaler"]) - - self.start_epoch = checkpoint["epoch"] + 1 - logging.info("Resume checkpoint from {}".format(url_or_filename)) - - @main_process - def log_stats(self, stats, split_name): - if isinstance(stats, dict): - log_stats = {**{f"{split_name}_{k}": v for k, v in stats.items()}} - with open(os.path.join(self.output_dir, "log.txt"), "a") as f: - f.write(json.dumps(log_stats) + "\n") - elif isinstance(stats, list): - pass - - @main_process - def log_config(self): - with open(os.path.join(self.output_dir, "log.txt"), "a") as f: - f.write(json.dumps(self.config.to_dict(), indent=4) + "\n") diff --git a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/rvc.py b/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/rvc.py deleted file mode 100644 index bdd0286a56a75804a77c7aae09127afa2db1523b..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/webui/modules/implementations/rvc/rvc.py +++ /dev/null @@ -1,354 +0,0 @@ -# From https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI -""" -Copyright: RVC-Project -License: MIT -""" - -import gc -import os -import traceback - -import ffmpeg -import numpy as np -import torch.cuda -import argparse -import torch -from multiprocessing import cpu_count -from fairseq import checkpoint_utils - -from hubert.hubert_manager import HuBERTManager -from webui.modules.implementations.rvc.vc_infer_pipeline import VC - -from webui.modules.implementations.rvc.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) - -hubert_model = None -weight_root = os.path.join('data', 'models', 'rvc') - - -def config_file_change_fp32(): - try: - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - except Exception as e: - print(f'exception in config_file_change_fp32: {e}') - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - config_file_change_fp32() - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - # if self.gpu_mem <= 4: - # with open("trainset_preprocess_pipeline_print.py", "r") as f: - # strr = f.read().replace("3.7", "3.0") - # with open("trainset_preprocess_pipeline_print.py", "w") as f: - # f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - config_file_change_fp32() - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - config_file_change_fp32() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max - - -config = Config() - - -def load_hubert(): - global hubert_model - if not hubert_model: - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - [HuBERTManager.make_sure_hubert_rvc_installed()], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - - -def load_audio(file, sr): - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - return np.frombuffer(out, np.float32).flatten() - - -vc = None -rvc_model_name = None -maximum = 0 - - -def unload_rvc(): - global vc, rvc_model_name - rvc_model_name = None - vc = None - gc.collect() - torch.cuda.empty_cache() - - -def load_rvc(model): - global vc, rvc_model_name, maximum - if model != rvc_model_name: - rvc_model_name = model - unload_rvc() - # Load rvc - maximum = get_vc(model)['maximum'] - return maximum - - -def vc_single( - sid, - input_audio_path, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - crepe_hop_length=128 -): # spk_item, input_audio0, vc_transform0,f0_file,f0method0 - global tgt_sr, net_g, vc, hubert_model, version - if input_audio_path is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - try: - audio = load_audio(input_audio_path, 16000) - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if hubert_model is None: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - # file_big_npy = ( - # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - # ) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=f0_file, - crepe_hop_length=crepe_hop_length - ) - if resample_sr >= 16000 and tgt_sr != resample_sr: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - - -# 一个选项卡全局只能有一个音色 -def get_vc(sid): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return {"visible": False, "__type__": "update"} - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return {"visible": True, "maximum": n_spk, "__type__": "update"} - - -def change_info(path, info, name): - try: - ckpt = torch.load(path, map_location="cpu") - ckpt["info"] = info - if name == "": - name = os.path.basename(path) - torch.save(ckpt, "weights/%s" % name) - return "Success." - except: - return traceback.format_exc() - - -def change_info_(ckpt_path): - if not os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")): - return - try: - with open( - ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r" - ) as f: - info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1]) - sr, f0 = info["sample_rate"], info["if_f0"] - version = "v2" if ("version" in info and info["version"] == "v2") else "v1" - return sr, str(f0), version - except: - traceback.print_exc() diff --git a/spaces/Xenova/text-to-speech-client/assets/worker-7f2d1abe.js b/spaces/Xenova/text-to-speech-client/assets/worker-7f2d1abe.js deleted file mode 100644 index e6b33e408a300394a8ae35d6e48413e8d8076aaa..0000000000000000000000000000000000000000 --- a/spaces/Xenova/text-to-speech-client/assets/worker-7f2d1abe.js +++ /dev/null @@ -1,1790 +0,0 @@ -var fn=Object.defineProperty;var gn=(nt,b,n)=>b in nt?fn(nt,b,{enumerable:!0,configurable:!0,writable:!0,value:n}):nt[b]=n;var je=(nt,b,n)=>(gn(nt,typeof b!="symbol"?b+"":b,n),n);(function(){var nt;"use strict";function _mergeNamespaces(b,n){return n.forEach(function(a){a&&typeof a!="string"&&!Array.isArray(a)&&Object.keys(a).forEach(function(u){if(u!=="default"&&!(u in b)){var c=Object.getOwnPropertyDescriptor(a,u);Object.defineProperty(b,u,c.get?c:{enumerable:!0,get:function(){return a[u]}})}})}),Object.freeze(b)}function dispatchCallback(b,n){b!==null&&b(n)}function reverseDictionary(b){return Object.fromEntries(Object.entries(b).map(([n,a])=>[a,n]))}function escapeRegExp(b){return b.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}const Callable=class{constructor(){let b=function(...n){return b._call(...n)};return Object.setPrototypeOf(b,new.target.prototype)}_call(...b){throw Error("Must implement _call method in subclass")}};function isTypedArray(b){var n,a,u;return((u=(a=(n=b==null?void 0:b.prototype)==null?void 0:n.__proto__)==null?void 0:a.constructor)==null?void 0:u.name)==="TypedArray"}function isIntegralNumber(b){return Number.isInteger(b)||typeof b=="bigint"}function exists(b){return b!=null}function mergeArrays(...b){return Array.prototype.concat.apply([],b)}var sharp={},ONNX_NODE=Object.freeze({__proto__:null,default:sharp});function getDefaultExportFromCjs(b){return b&&b.__esModule&&Object.prototype.hasOwnProperty.call(b,"default")?b.default:b}function getAugmentedNamespace(b){if(b.__esModule)return b;var n=b.default;if(typeof n=="function"){var a=function u(){return this instanceof u?Reflect.construct(n,arguments,this.constructor):n.apply(this,arguments)};a.prototype=n.prototype}else a={};return Object.defineProperty(a,"__esModule",{value:!0}),Object.keys(b).forEach(function(u){var c=Object.getOwnPropertyDescriptor(b,u);Object.defineProperty(a,u,c.get?c:{enumerable:!0,get:function(){return b[u]}})}),a}var ortWeb_min$1={exports:{}};const backends={},backendsSortedByPriority=[],registerBackend=(b,n,a)=>{if(n&&typeof n.init=="function"&&typeof n.createSessionHandler=="function"){const u=backends[b];if(u===void 0)backends[b]={backend:n,priority:a};else{if(u.priority>a)return;if(u.priority===a&&u.backend!==n)throw new Error(`cannot register backend "${b}" using priority ${a}`)}if(a>=0){const c=backendsSortedByPriority.indexOf(b);c!==-1&&backendsSortedByPriority.splice(c,1);for(let p=0;p{const n=b.length===0?backendsSortedByPriority:b,a=[];for(const u of n){const c=backends[u];if(c){if(c.initialized)return c.backend;if(c.aborted)continue;const p=!!c.initPromise;try{return p||(c.initPromise=c.backend.init()),await c.initPromise,c.initialized=!0,c.backend}catch(s){p||a.push({name:u,err:s}),c.aborted=!0}finally{delete c.initPromise}}}throw new Error(`no available backend found. ERR: ${a.map(u=>`[${u.name}] ${u.err}`).join(", ")}`)};class EnvImpl{constructor(){this.wasm={},this.webgl={},this.logLevelInternal="warning"}set logLevel(n){if(n!==void 0){if(typeof n!="string"||["verbose","info","warning","error","fatal"].indexOf(n)===-1)throw new Error(`Unsupported logging level: ${n}`);this.logLevelInternal=n}}get logLevel(){return this.logLevelInternal}}const env$1=new EnvImpl,isBigInt64ArrayAvailable=typeof BigInt64Array<"u"&&typeof BigInt64Array.from=="function",isBigUint64ArrayAvailable=typeof BigUint64Array<"u"&&typeof BigUint64Array.from=="function",NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP=new Map([["float32",Float32Array],["uint8",Uint8Array],["int8",Int8Array],["uint16",Uint16Array],["int16",Int16Array],["int32",Int32Array],["bool",Uint8Array],["float64",Float64Array],["uint32",Uint32Array]]),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP=new Map([[Float32Array,"float32"],[Uint8Array,"uint8"],[Int8Array,"int8"],[Uint16Array,"uint16"],[Int16Array,"int16"],[Int32Array,"int32"],[Float64Array,"float64"],[Uint32Array,"uint32"]]);isBigInt64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("int64",BigInt64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigInt64Array,"int64")),isBigUint64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("uint64",BigUint64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigUint64Array,"uint64"));const calculateSize=b=>{let n=1;for(let a=0;a{const t=document.createElement("canvas"),e=t.getContext("2d");if(!n||!e)return o();const r=new Image;r.crossOrigin="Anonymous",r.src=n,r.onload=()=>{t.width=r.width,t.height=r.height,e.drawImage(r,0,0,t.width,t.height);const i=e.getImageData(0,0,t.width,t.height);if(a!==void 0){if(a.height!==void 0&&a.height!==t.height)throw new Error("Image input config height doesn't match ImageBitmap height");if(f.height=t.height,a.width!==void 0&&a.width!==t.width)throw new Error("Image input config width doesn't match ImageBitmap width");f.width=t.width}else f.height=t.height,f.width=t.width;l(ut.bufferToTensor(i.data,f))}});throw new Error("Input data provided is not supported - aborted tensor creation")}if(h!==void 0)return ut.bufferToTensor(h,f);throw new Error("Input data provided is not supported - aborted tensor creation")}toImageData(n){var a,u;const c=document.createElement("canvas").getContext("2d");let p;if(c!=null){const s=this.dims[3],h=this.dims[2],f=this.dims[1],l=n!==void 0&&n.format!==void 0?n.format:"RGB",o=n!==void 0&&((a=n.norm)===null||a===void 0?void 0:a.mean)!==void 0?n.norm.mean:255,t=n!==void 0&&((u=n.norm)===null||u===void 0?void 0:u.bias)!==void 0?n.norm.bias:0,e=h*s;if(n!==void 0){if(n.height!==void 0&&n.height!==h)throw new Error("Image output config height doesn't match tensor height");if(n.width!==void 0&&n.width!==s)throw new Error("Image output config width doesn't match tensor width");if(n.format!==void 0&&f===4&&n.format!=="RGBA"||f===3&&n.format!=="RGB"&&n.format!=="BGR")throw new Error("Tensor format doesn't match input tensor dims")}const r=4;let i=0,d=1,g=2,m=3,_=0,y=e,w=e*2,v=-1;l==="RGBA"?(_=0,y=e,w=e*2,v=e*3):l==="RGB"?(_=0,y=e,w=e*2):l==="RBG"&&(_=0,w=e,y=e*2),p=c.createImageData(s,h);for(let S=0;S"u")throw new Error(`input '${l}' is missing in 'feeds'.`);if(s)for(const l of this.outputNames)c[l]=null;const h=await this.handler.run(n,c,p),f={};for(const l in h)Object.hasOwnProperty.call(h,l)&&(f[l]=new Tensor$1(h[l].type,h[l].data,h[l].dims));return f}static async create(n,a,u,c){let p,s={};if(typeof n=="string"){if(p=n,typeof a=="object"&&a!==null)s=a;else if(typeof a<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof Uint8Array){if(p=n,typeof a=="object"&&a!==null)s=a;else if(typeof a<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof ArrayBuffer||typeof SharedArrayBuffer<"u"&&n instanceof SharedArrayBuffer){const t=n;let e=0,r=n.byteLength;if(typeof a=="object"&&a!==null)s=a;else if(typeof a=="number"){if(e=a,!Number.isSafeInteger(e))throw new RangeError("'byteOffset' must be an integer.");if(e<0||e>=t.byteLength)throw new RangeError(`'byteOffset' is out of range [0, ${t.byteLength}).`);if(r=n.byteLength-e,typeof u=="number"){if(r=u,!Number.isSafeInteger(r))throw new RangeError("'byteLength' must be an integer.");if(r<=0||e+r>t.byteLength)throw new RangeError(`'byteLength' is out of range (0, ${t.byteLength-e}].`);if(typeof c=="object"&&c!==null)s=c;else if(typeof c<"u")throw new TypeError("'options' must be an object.")}else if(typeof u<"u")throw new TypeError("'byteLength' must be a number.")}else if(typeof a<"u")throw new TypeError("'options' must be an object.");p=new Uint8Array(t,e,r)}else throw new TypeError("Unexpected argument[0]: must be 'path' or 'buffer'.");const f=(s.executionProviders||[]).map(t=>typeof t=="string"?t:t.name),o=await(await resolveBackend(f)).createSessionHandler(p,s);return new dn(o)}startProfiling(){this.handler.startProfiling()}endProfiling(){this.handler.endProfiling()}get inputNames(){return this.handler.inputNames}get outputNames(){return this.handler.outputNames}};const InferenceSession$1=InferenceSession$2;var lib=Object.freeze({__proto__:null,InferenceSession:InferenceSession$1,Tensor:Tensor$1,env:env$1,registerBackend}),require$$0=getAugmentedNamespace(lib);/*! -* ONNX Runtime Web v1.14.0 -* Copyright (c) Microsoft Corporation. All rights reserved. -* Licensed under the MIT License. -*/(function(module,exports){(function(b,n){module.exports=n(require$$0)})(self,__WEBPACK_EXTERNAL_MODULE__1670__=>(()=>{var __webpack_modules__={3474:(b,n,a)=>{var u,c=(u=(u=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){function s(){return X.buffer!=ne&&Pe(X.buffer),me}function h(){return X.buffer!=ne&&Pe(X.buffer),Me}function f(){return X.buffer!=ne&&Pe(X.buffer),Oe}function l(){return X.buffer!=ne&&Pe(X.buffer),ce}function o(){return X.buffer!=ne&&Pe(X.buffer),Te}var t,e,r;p=p||{},t||(t=p!==void 0?p:{}),t.ready=new Promise(function(x,P){e=x,r=P});var i,d,g,m,_,y,w=Object.assign({},t),v="./this.program",S=(x,P)=>{throw P},O=typeof window=="object",A=typeof importScripts=="function",T=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",M=t.ENVIRONMENT_IS_PTHREAD||!1,N="";function B(x){return t.locateFile?t.locateFile(x,N):N+x}if(T){let x;N=A?a(908).dirname(N)+"/":"//",y=()=>{_||(m=a(1384),_=a(908))},i=function(P,k){return y(),P=_.normalize(P),m.readFileSync(P,k?void 0:"utf8")},g=P=>((P=i(P,!0)).buffer||(P=new Uint8Array(P)),P),d=(P,k,D)=>{y(),P=_.normalize(P),m.readFile(P,function(j,V){j?D(j):k(V.buffer)})},1{if(Ge())throw process.exitCode=P,k;k instanceof Je||z("exiting due to exception: "+k),process.exit(P)},t.inspect=function(){return"[Emscripten Module object]"};try{x=a(9925)}catch(P){throw console.error('The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?'),P}a.g.Worker=x.Worker}else(O||A)&&(A?N=self.location.href:typeof document<"u"&&document.currentScript&&(N=document.currentScript.src),u&&(N=u),N=N.indexOf("blob:")!==0?N.substr(0,N.replace(/[?#].*/,"").lastIndexOf("/")+1):"",T||(i=x=>{var P=new XMLHttpRequest;return P.open("GET",x,!1),P.send(null),P.responseText},A&&(g=x=>{var P=new XMLHttpRequest;return P.open("GET",x,!1),P.responseType="arraybuffer",P.send(null),new Uint8Array(P.response)}),d=(x,P,k)=>{var D=new XMLHttpRequest;D.open("GET",x,!0),D.responseType="arraybuffer",D.onload=()=>{D.status==200||D.status==0&&D.response?P(D.response):k()},D.onerror=k,D.send(null)}));T&&typeof performance>"u"&&(a.g.performance=a(6953).performance);var $=console.log.bind(console),L=console.warn.bind(console);T&&(y(),$=x=>m.writeSync(1,x+` -`),L=x=>m.writeSync(2,x+` -`));var H,C=t.print||$,z=t.printErr||L;Object.assign(t,w),w=null,t.thisProgram&&(v=t.thisProgram),t.quit&&(S=t.quit),t.wasmBinary&&(H=t.wasmBinary);var J=t.noExitRuntime||!1;typeof WebAssembly!="object"&&pe("no native wasm support detected");var X,te,ne,me,Me,Oe,ce,Te,ye=!1,Fe=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function He(x,P,k){var D=(P>>>=0)+k;for(k=P;x[k]&&!(k>=D);)++k;if(16(j=(240&j)==224?(15&j)<<12|V<<6|K:(7&j)<<18|V<<12|K<<6|63&x[P++])?D+=String.fromCharCode(j):(j-=65536,D+=String.fromCharCode(55296|j>>10,56320|1023&j))}}else D+=String.fromCharCode(j)}return D}function Ae(x,P){return(x>>>=0)?He(h(),x,P):""}function Ne(x,P,k,D){if(!(0>>=0;D=k+D-1;for(var V=0;V=K&&(K=65536+((1023&K)<<10)|1023&x.charCodeAt(++V)),127>=K){if(k>=D)break;P[k++>>>0]=K}else{if(2047>=K){if(k+1>=D)break;P[k++>>>0]=192|K>>6}else{if(65535>=K){if(k+2>=D)break;P[k++>>>0]=224|K>>12}else{if(k+3>=D)break;P[k++>>>0]=240|K>>18,P[k++>>>0]=128|K>>12&63}P[k++>>>0]=128|K>>6&63}P[k++>>>0]=128|63&K}}return P[k>>>0]=0,k-j}function De(x){for(var P=0,k=0;k=D?P++:2047>=D?P+=2:55296<=D&&57343>=D?(P+=4,++k):P+=3}return P}function Pe(x){ne=x,t.HEAP8=me=new Int8Array(x),t.HEAP16=new Int16Array(x),t.HEAP32=Oe=new Int32Array(x),t.HEAPU8=Me=new Uint8Array(x),t.HEAPU16=new Uint16Array(x),t.HEAPU32=ce=new Uint32Array(x),t.HEAPF32=new Float32Array(x),t.HEAPF64=Te=new Float64Array(x)}M&&(ne=t.buffer);var ve=t.INITIAL_MEMORY||16777216;if(M)X=t.wasmMemory,ne=t.buffer;else if(t.wasmMemory)X=t.wasmMemory;else if(!((X=new WebAssembly.Memory({initial:ve/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw z("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),T&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");X&&(ne=X.buffer),ve=ne.byteLength,Pe(ne);var Be,Ue=[],Ve=[],Xe=[],Qe=[];function Ge(){return J||!1}function ze(){var x=t.preRun.shift();Ue.unshift(x)}var Se,$e=0,Ye=null;function pe(x){throw M?postMessage({cmd:"onAbort",arg:x}):t.onAbort&&t.onAbort(x),z(x="Aborted("+x+")"),ye=!0,x=new WebAssembly.RuntimeError(x+". Build with -sASSERTIONS for more info."),r(x),x}function pt(){return Se.startsWith("data:application/octet-stream;base64,")}function lt(){var x=Se;try{if(x==Se&&H)return new Uint8Array(H);if(g)return g(x);throw"both async and sync fetching of the wasm failed"}catch(P){pe(P)}}Se="ort-wasm-threaded.wasm",pt()||(Se=B(Se));var Et={};function Je(x){this.name="ExitStatus",this.message="Program terminated with exit("+x+")",this.status=x}function ct(x){(x=re.Vb[x])||pe(),re.mc(x)}function dt(x){var P=re.Cc();if(!P)return 6;re.ac.push(P),re.Vb[x.Ub]=P,P.Ub=x.Ub;var k={cmd:"run",start_routine:x.Ic,arg:x.zc,pthread_ptr:x.Ub};return P.$b=()=>{k.time=performance.now(),P.postMessage(k,x.Nc)},P.loaded&&(P.$b(),delete P.$b),0}function Le(x){if(M)return Q(1,1,x);Ge()||(re.oc(),t.onExit&&t.onExit(x),ye=!0),S(x,new Je(x))}function it(x,P){if(!P&&M)throw kt(x),"unwind";Ge()||M||(Wt(),rt(Xe),qt(0),Lt[1].length&&Ft(1,10),Lt[2].length&&Ft(2,10),re.oc()),Le(x)}var re={Yb:[],ac:[],qc:[],Vb:{},fc:function(){M&&re.Ec()},Pc:function(){},Ec:function(){re.receiveObjectTransfer=re.Gc,re.threadInitTLS=re.pc,re.setExitStatus=re.nc,J=!1},nc:function(){},oc:function(){for(var x of Object.values(re.Vb))re.mc(x);for(x of re.Yb)x.terminate();re.Yb=[]},mc:function(x){var P=x.Ub;delete re.Vb[P],re.Yb.push(x),re.ac.splice(re.ac.indexOf(x),1),x.Ub=0,Rt(P)},Gc:function(){},pc:function(){re.qc.forEach(x=>x())},Fc:function(x,P){x.onmessage=k=>{var D=(k=k.data).cmd;if(x.Ub&&(re.Bc=x.Ub),k.targetThread&&k.targetThread!=Dt()){var j=re.Vb[k.Qc];j?j.postMessage(k,k.transferList):z('Internal error! Worker sent a message "'+D+'" to target pthread '+k.targetThread+", but that thread no longer exists!")}else D==="processProxyingQueue"?F(k.queue):D==="spawnThread"?dt(k):D==="cleanupThread"?ct(k.thread):D==="killThread"?(k=k.thread,D=re.Vb[k],delete re.Vb[k],D.terminate(),Rt(k),re.ac.splice(re.ac.indexOf(D),1),D.Ub=0):D==="cancelThread"?re.Vb[k.thread].postMessage({cmd:"cancel"}):D==="loaded"?(x.loaded=!0,P&&P(x),x.$b&&(x.$b(),delete x.$b)):D==="print"?C("Thread "+k.threadId+": "+k.text):D==="printErr"?z("Thread "+k.threadId+": "+k.text):D==="alert"?alert("Thread "+k.threadId+": "+k.text):k.target==="setimmediate"?x.postMessage(k):D==="onAbort"?t.onAbort&&t.onAbort(k.arg):D&&z("worker sent an unknown command "+D);re.Bc=void 0},x.onerror=k=>{throw z("worker sent an error! "+k.filename+":"+k.lineno+": "+k.message),k},T&&(x.on("message",function(k){x.onmessage({data:k})}),x.on("error",function(k){x.onerror(k)}),x.on("detachedExit",function(){})),x.postMessage({cmd:"load",urlOrBlob:t.mainScriptUrlOrBlob||u,wasmMemory:X,wasmModule:te})},yc:function(){var x=B("ort-wasm-threaded.worker.js");re.Yb.push(new Worker(x))},Cc:function(){return re.Yb.length==0&&(re.yc(),re.Fc(re.Yb[0])),re.Yb.pop()}};function rt(x){for(;0>2>>>0];x=f()[x+48>>2>>>0],Qt(P,P-x),ue(P)};var Ze=[];function _e(x){var P=Ze[x];return P||(x>=Ze.length&&(Ze.length=x+1),Ze[x]=P=Be.get(x)),P}t.invokeEntryPoint=function(x,P){x=_e(x)(P),Ge()?re.nc(x):Kt(x)};var ot,ft,st=[],se=0,ie=0;function oe(x){this.Zb=x,this.Sb=x-24,this.xc=function(P){l()[this.Sb+4>>2>>>0]=P},this.bc=function(){return l()[this.Sb+4>>2>>>0]},this.wc=function(P){l()[this.Sb+8>>2>>>0]=P},this.Dc=function(){return l()[this.Sb+8>>2>>>0]},this.rc=function(){f()[this.Sb>>2>>>0]=0},this.hc=function(P){P=P?1:0,s()[this.Sb+12>>0>>>0]=P},this.uc=function(){return s()[this.Sb+12>>0>>>0]!=0},this.ic=function(P){P=P?1:0,s()[this.Sb+13>>0>>>0]=P},this.kc=function(){return s()[this.Sb+13>>0>>>0]!=0},this.fc=function(P,k){this.cc(0),this.xc(P),this.wc(k),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(f(),this.Sb>>2,1)},this.Hc=function(){return Atomics.sub(f(),this.Sb>>2,1)===1},this.cc=function(P){l()[this.Sb+16>>2>>>0]=P},this.tc=function(){return l()[this.Sb+16>>2>>>0]},this.vc=function(){if(Jt(this.bc()))return l()[this.Zb>>2>>>0];var P=this.tc();return P!==0?P:this.Zb}}function gt(x){return Gt(new oe(x).Sb)}function at(x,P,k,D){return M?Q(3,1,x,P,k,D):mt(x,P,k,D)}function mt(x,P,k,D){if(typeof SharedArrayBuffer>"u")return z("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var j=[];return M&&j.length===0?at(x,P,k,D):(x={Ic:k,Ub:x,zc:D,Nc:j},M?(x.Oc="spawnThread",postMessage(x,j),0):dt(x))}function bt(x,P,k){return M?Q(4,1,x,P,k):0}function _t(x,P){if(M)return Q(5,1,x,P)}function yt(x,P){if(M)return Q(6,1,x,P)}function wt(x,P,k){if(M)return Q(7,1,x,P,k)}function vt(x,P,k){return M?Q(8,1,x,P,k):0}function Tt(x,P){if(M)return Q(9,1,x,P)}function xt(x,P,k){if(M)return Q(10,1,x,P,k)}function St(x,P,k,D){if(M)return Q(11,1,x,P,k,D)}function Ot(x,P,k,D){if(M)return Q(12,1,x,P,k,D)}function At(x,P,k,D){if(M)return Q(13,1,x,P,k,D)}function Pt(x){if(M)return Q(14,1,x)}function E(x,P){if(M)return Q(15,1,x,P)}function I(x,P,k){if(M)return Q(16,1,x,P,k)}function F(x){Atomics.store(f(),x>>2,1),Dt()&&Yt(x),Atomics.compareExchange(f(),x>>2,1,0)}function R(x){return l()[x>>>2]+4294967296*f()[x+4>>>2]}function U(x,P,k,D,j,V){return M?Q(17,1,x,P,k,D,j,V):-52}function W(x,P,k,D,j,V){if(M)return Q(18,1,x,P,k,D,j,V)}function Y(x){var P=De(x)+1,k=$t(P);return k&&Ne(x,s(),k,P),k}function Z(x,P,k){function D(fe){return(fe=fe.toTimeString().match(/\(([A-Za-z ]+)\)$/))?fe[1]:"GMT"}if(M)return Q(19,1,x,P,k);var j=new Date().getFullYear(),V=new Date(j,0,1),K=new Date(j,6,1);j=V.getTimezoneOffset();var ee=K.getTimezoneOffset(),he=Math.max(j,ee);f()[x>>2>>>0]=60*he,f()[P>>2>>>0]=+(j!=ee),x=D(V),P=D(K),x=Y(x),P=Y(P),ee>2>>>0]=x,l()[k+4>>2>>>0]=P):(l()[k>>2>>>0]=P,l()[k+4>>2>>>0]=x)}function Q(x,P){var k=arguments.length-2,D=arguments;return Mt(()=>{for(var j=jt(8*k),V=j>>3,K=0;K>>0]=ee}return Xt(x,k,j,P)})}t.executeNotifiedProxyingQueue=F,ft=T?()=>{var x=process.hrtime();return 1e3*x[0]+x[1]/1e6}:M?()=>performance.now()-t.__performance_now_clock_drift:()=>performance.now();var ae,we=[],Ce={};function Ie(){if(!ae){var x,P={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:v||"./this.program"};for(x in Ce)Ce[x]===void 0?delete P[x]:P[x]=Ce[x];var k=[];for(x in P)k.push(x+"="+P[x]);ae=k}return ae}function G(x,P){if(M)return Q(20,1,x,P);var k=0;return Ie().forEach(function(D,j){var V=P+k;for(j=l()[x+4*j>>2>>>0]=V,V=0;V>0>>>0]=D.charCodeAt(V);s()[j>>0>>>0]=0,k+=D.length+1}),0}function ge(x,P){if(M)return Q(21,1,x,P);var k=Ie();l()[x>>2>>>0]=k.length;var D=0;return k.forEach(function(j){D+=j.length+1}),l()[P>>2>>>0]=D,0}function xe(x){return M?Q(22,1,x):52}function qe(x,P,k,D){return M?Q(23,1,x,P,k,D):52}function et(x,P,k,D,j){return M?Q(24,1,x,P,k,D,j):70}var Lt=[null,[],[]];function Ft(x,P){var k=Lt[x];P===0||P===10?((x===1?C:z)(He(k,0)),k.length=0):k.push(P)}function Bt(x,P,k,D){if(M)return Q(25,1,x,P,k,D);for(var j=0,V=0;V>2>>>0],ee=l()[P+4>>2>>>0];P+=8;for(var he=0;he>>0]);j+=ee}return l()[D>>2>>>0]=j,0}var Re=0;function It(x){return x%4==0&&(x%100!=0||x%400==0)}var zt=[31,29,31,30,31,30,31,31,30,31,30,31],Ut=[31,28,31,30,31,30,31,31,30,31,30,31];function Vt(x,P,k,D){function j(q,be,Ee){for(q=typeof q=="number"?q.toString():q||"";q.lengthht?-1:0tt-q.getDate())){q.setDate(q.getDate()+be);break}be-=tt-q.getDate()+1,q.setDate(1),11>Ee?q.setMonth(Ee+1):(q.setMonth(0),q.setFullYear(q.getFullYear()+1))}return Ee=new Date(q.getFullYear()+1,0,4),be=ee(new Date(q.getFullYear(),0,4)),Ee=ee(Ee),0>=K(be,q)?0>=K(Ee,q)?q.getFullYear()+1:q.getFullYear():q.getFullYear()-1}var fe=f()[D+40>>2>>>0];for(var ke in D={Lc:f()[D>>2>>>0],Kc:f()[D+4>>2>>>0],dc:f()[D+8>>2>>>0],jc:f()[D+12>>2>>>0],ec:f()[D+16>>2>>>0],Xb:f()[D+20>>2>>>0],Tb:f()[D+24>>2>>>0],Wb:f()[D+28>>2>>>0],Rc:f()[D+32>>2>>>0],Jc:f()[D+36>>2>>>0],Mc:fe?Ae(fe):""},k=Ae(k),fe={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})k=k.replace(new RegExp(ke,"g"),fe[ke]);var Ke="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),We="January February March April May June July August September October November December".split(" ");for(ke in fe={"%a":function(q){return Ke[q.Tb].substring(0,3)},"%A":function(q){return Ke[q.Tb]},"%b":function(q){return We[q.ec].substring(0,3)},"%B":function(q){return We[q.ec]},"%C":function(q){return V((q.Xb+1900)/100|0,2)},"%d":function(q){return V(q.jc,2)},"%e":function(q){return j(q.jc,2," ")},"%g":function(q){return he(q).toString().substring(2)},"%G":function(q){return he(q)},"%H":function(q){return V(q.dc,2)},"%I":function(q){return(q=q.dc)==0?q=12:12q.dc?"AM":"PM"},"%S":function(q){return V(q.Lc,2)},"%t":function(){return" "},"%u":function(q){return q.Tb||7},"%U":function(q){return V(Math.floor((q.Wb+7-q.Tb)/7),2)},"%V":function(q){var be=Math.floor((q.Wb+7-(q.Tb+6)%7)/7);if(2>=(q.Tb+371-q.Wb-2)%7&&be++,be)be==53&&((Ee=(q.Tb+371-q.Wb)%7)==4||Ee==3&&It(q.Xb)||(be=1));else{be=52;var Ee=(q.Tb+7-q.Wb-1)%7;(Ee==4||Ee==5&&It(q.Xb%400-1))&&be++}return V(be,2)},"%w":function(q){return q.Tb},"%W":function(q){return V(Math.floor((q.Wb+7-(q.Tb+6)%7)/7),2)},"%y":function(q){return(q.Xb+1900).toString().substring(2)},"%Y":function(q){return q.Xb+1900},"%z":function(q){var be=0<=(q=q.Jc);return q=Math.abs(q)/60,(be?"+":"-")+("0000"+(q/60*100+q%60)).slice(-4)},"%Z":function(q){return q.Mc},"%%":function(){return"%"}},k=k.replace(/%%/g,"\0\0"),fe)k.includes(ke)&&(k=k.replace(new RegExp(ke,"g"),fe[ke](D)));return ke=function(q){var be=Array(De(q)+1);return Ne(q,be,0,be.length),be}(k=k.replace(/\0\0/g,"%")),ke.length>P?0:(function(q,be){s().set(q,be>>>0)}(ke,x),ke.length-1)}re.fc();var hn=[null,Le,kt,at,bt,_t,yt,wt,vt,Tt,xt,St,Ot,At,Pt,E,I,U,W,Z,G,ge,xe,qe,et,Bt],pn={b:function(x){return $t(x+24)+24},n:function(x){return(x=new oe(x)).uc()||(x.hc(!0),se--),x.ic(!1),st.push(x),x.sc(),x.vc()},ma:function(x){throw z("Unexpected exception thrown, this is not properly supported - aborting"),ye=!0,x},x:function(){de(0);var x=st.pop();if(x.Hc()&&!x.kc()){var P=x.Dc();P&&_e(P)(x.Zb),gt(x.Zb)}ie=0},e:function(){var x=ie;if(!x)return Re=0;var P=new oe(x);P.cc(x);var k=P.bc();if(!k)return Re=0,x;for(var D=Array.prototype.slice.call(arguments),j=0;jF(D));else if(M)postMessage({targetThread:x,cmd:"processProxyingQueue",queue:D});else{if(!(x=re.Vb[x]))return;x.postMessage({cmd:"processProxyingQueue",queue:D})}return 1},Ea:function(){return-1},Pa:function(x,P){x=new Date(1e3*R(x)),f()[P>>2>>>0]=x.getUTCSeconds(),f()[P+4>>2>>>0]=x.getUTCMinutes(),f()[P+8>>2>>>0]=x.getUTCHours(),f()[P+12>>2>>>0]=x.getUTCDate(),f()[P+16>>2>>>0]=x.getUTCMonth(),f()[P+20>>2>>>0]=x.getUTCFullYear()-1900,f()[P+24>>2>>>0]=x.getUTCDay(),x=(x.getTime()-Date.UTC(x.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,f()[P+28>>2>>>0]=x},Qa:function(x,P){x=new Date(1e3*R(x)),f()[P>>2>>>0]=x.getSeconds(),f()[P+4>>2>>>0]=x.getMinutes(),f()[P+8>>2>>>0]=x.getHours(),f()[P+12>>2>>>0]=x.getDate(),f()[P+16>>2>>>0]=x.getMonth(),f()[P+20>>2>>>0]=x.getFullYear()-1900,f()[P+24>>2>>>0]=x.getDay();var k=new Date(x.getFullYear(),0,1),D=(x.getTime()-k.getTime())/864e5|0;f()[P+28>>2>>>0]=D,f()[P+36>>2>>>0]=-60*x.getTimezoneOffset(),D=new Date(x.getFullYear(),6,1).getTimezoneOffset(),x=0|(D!=(k=k.getTimezoneOffset())&&x.getTimezoneOffset()==Math.min(k,D)),f()[P+32>>2>>>0]=x},Ra:function(x){var P=new Date(f()[x+20>>2>>>0]+1900,f()[x+16>>2>>>0],f()[x+12>>2>>>0],f()[x+8>>2>>>0],f()[x+4>>2>>>0],f()[x>>2>>>0],0),k=f()[x+32>>2>>>0],D=P.getTimezoneOffset(),j=new Date(P.getFullYear(),0,1),V=new Date(P.getFullYear(),6,1).getTimezoneOffset(),K=j.getTimezoneOffset(),ee=Math.min(K,V);return 0>k?f()[x+32>>2>>>0]=+(V!=K&&ee==D):0>2>>>0]=P.getDay(),k=(P.getTime()-j.getTime())/864e5|0,f()[x+28>>2>>>0]=k,f()[x>>2>>>0]=P.getSeconds(),f()[x+4>>2>>>0]=P.getMinutes(),f()[x+8>>2>>>0]=P.getHours(),f()[x+12>>2>>>0]=P.getDate(),f()[x+16>>2>>>0]=P.getMonth(),P.getTime()/1e3|0},Aa:U,Ba:W,Sa:function x(P,k,D){x.Ac||(x.Ac=!0,Z(P,k,D))},y:function(){pe("")},U:function(){if(!T&&!A){var x="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";ot||(ot={}),ot[x]||(ot[x]=1,T&&(x="warning: "+x),z(x))}},ra:function(){return 4294901760},B:ft,Ia:function(x,P,k){h().copyWithin(x>>>0,P>>>0,P+k>>>0)},F:function(){return T?a(3993).cpus().length:navigator.hardwareConcurrency},Da:function(x,P,k){we.length=P,k>>=3;for(var D=0;D>>0];return(0>x?Et[-x-1]:hn[x]).apply(null,we)},qa:function(x){var P=h().length;if((x>>>=0)<=P||4294901760=k;k*=2){var D=P*(1+.2/k);D=Math.min(D,x+100663296);var j=Math;D=Math.max(x,D),j=j.min.call(j,4294901760,D+(65536-D%65536)%65536);e:{try{X.grow(j-ne.byteLength+65535>>>16),Pe(X.buffer);var V=1;break e}catch{}V=void 0}if(V)return!0}return!1},Na:function(){throw"unwind"},Ga:G,Ha:ge,J:it,I:xe,S:qe,ga:et,R:Bt,d:function(){return Re},na:function x(P,k){x.lc||(x.lc=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var j=new Uint8Array(1);return()=>(crypto.getRandomValues(j),j[0])}if(T)try{var V=a(Object(function(){var K=new Error("Cannot find module 'crypto'");throw K.code="MODULE_NOT_FOUND",K}()));return()=>V.randomBytes(1)[0]}catch{}return()=>pe("randomDevice")}());for(var D=0;D>0>>>0]=x.lc();return 0},ia:function(x,P,k){var D=le();try{return _e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},ja:function(x,P,k){var D=le();try{return _e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},K:function(x){var P=le();try{return _e(x)()}catch(k){if(ue(P),k!==k+0)throw k;de(1,0)}},f:function(x,P){var k=le();try{return _e(x)(P)}catch(D){if(ue(k),D!==D+0)throw D;de(1,0)}},P:function(x,P,k){var D=le();try{return _e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},Q:function(x,P,k){var D=le();try{return _e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},k:function(x,P,k){var D=le();try{return _e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},p:function(x,P,k,D){var j=le();try{return _e(x)(P,k,D)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},q:function(x,P,k,D,j){var V=le();try{return _e(x)(P,k,D,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},N:function(x,P,k,D,j,V){var K=le();try{return _e(x)(P,k,D,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},s:function(x,P,k,D,j,V){var K=le();try{return _e(x)(P,k,D,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},w:function(x,P,k,D,j,V,K){var ee=le();try{return _e(x)(P,k,D,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},L:function(x,P,k,D,j,V,K,ee){var he=le();try{return _e(x)(P,k,D,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},E:function(x,P,k,D,j,V,K,ee,he,fe,ke,Ke){var We=le();try{return _e(x)(P,k,D,j,V,K,ee,he,fe,ke,Ke)}catch(q){if(ue(We),q!==q+0)throw q;de(1,0)}},aa:function(x,P,k,D,j,V,K,ee){var he=le();try{return un(x,P,k,D,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},_:function(x,P,k,D,j,V,K){var ee=le();try{return en(x,P,k,D,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},Z:function(x,P,k,D,j){var V=le();try{return ln(x,P,k,D,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},ca:function(x,P,k,D){var j=le();try{return sn(x,P,k,D)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},$:function(x){var P=le();try{return Zt(x)}catch(k){if(ue(P),k!==k+0)throw k;de(1,0)}},ba:function(x,P){var k=le();try{return an(x,P)}catch(D){if(ue(k),D!==D+0)throw D;de(1,0)}},Y:function(x,P,k){var D=le();try{return tn(x,P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},g:function(x){var P=le();try{_e(x)()}catch(k){if(ue(P),k!==k+0)throw k;de(1,0)}},r:function(x,P){var k=le();try{_e(x)(P)}catch(D){if(ue(k),D!==D+0)throw D;de(1,0)}},i:function(x,P,k){var D=le();try{_e(x)(P,k)}catch(j){if(ue(D),j!==j+0)throw j;de(1,0)}},ha:function(x,P,k,D){var j=le();try{_e(x)(P,k,D)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},m:function(x,P,k,D){var j=le();try{_e(x)(P,k,D)}catch(V){if(ue(j),V!==V+0)throw V;de(1,0)}},v:function(x,P,k,D,j){var V=le();try{_e(x)(P,k,D,j)}catch(K){if(ue(V),K!==K+0)throw K;de(1,0)}},u:function(x,P,k,D,j,V){var K=le();try{_e(x)(P,k,D,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},O:function(x,P,k,D,j,V,K){var ee=le();try{_e(x)(P,k,D,j,V,K)}catch(he){if(ue(ee),he!==he+0)throw he;de(1,0)}},A:function(x,P,k,D,j,V,K,ee){var he=le();try{_e(x)(P,k,D,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},ka:function(x,P,k,D,j,V,K,ee,he){var fe=le();try{_e(x)(P,k,D,j,V,K,ee,he)}catch(ke){if(ue(fe),ke!==ke+0)throw ke;de(1,0)}},C:function(x,P,k,D,j,V,K,ee,he,fe,ke){var Ke=le();try{_e(x)(P,k,D,j,V,K,ee,he,fe,ke)}catch(We){if(ue(Ke),We!==We+0)throw We;de(1,0)}},D:function(x,P,k,D,j,V,K,ee,he,fe,ke,Ke,We,q,be,Ee){var tt=le();try{_e(x)(P,k,D,j,V,K,ee,he,fe,ke,Ke,We,q,be,Ee)}catch(ht){if(ue(tt),ht!==ht+0)throw ht;de(1,0)}},fa:function(x,P,k,D,j,V,K,ee){var he=le();try{nn(x,P,k,D,j,V,K,ee)}catch(fe){if(ue(he),fe!==fe+0)throw fe;de(1,0)}},da:function(x,P,k,D,j,V,K,ee,he,fe,ke,Ke){var We=le();try{on(x,P,k,D,j,V,K,ee,he,fe,ke,Ke)}catch(q){if(ue(We),q!==q+0)throw q;de(1,0)}},ea:function(x,P,k,D,j,V){var K=le();try{rn(x,P,k,D,j,V)}catch(ee){if(ue(K),ee!==ee+0)throw ee;de(1,0)}},o:function(x){return x},a:X||t.wasmMemory,G:function(x){Re=x},la:Vt,z:function(x,P,k,D){return Vt(x,P,k,D)}};(function(){function x(j,V){t.asm=j.exports,re.qc.push(t.asm.sb),Be=t.asm.ub,Ve.unshift(t.asm.Va),te=V,M||($e--,t.monitorRunDependencies&&t.monitorRunDependencies($e),$e==0&&Ye&&(j=Ye,Ye=null,j()))}function P(j){x(j.instance,j.module)}function k(j){return function(){if(!H&&(O||A)){if(typeof fetch=="function"&&!Se.startsWith("file://"))return fetch(Se,{credentials:"same-origin"}).then(function(V){if(!V.ok)throw"failed to load wasm binary file at '"+Se+"'";return V.arrayBuffer()}).catch(function(){return lt()});if(d)return new Promise(function(V,K){d(Se,function(ee){V(new Uint8Array(ee))},K)})}return Promise.resolve().then(function(){return lt()})}().then(function(V){return WebAssembly.instantiate(V,D)}).then(function(V){return V}).then(j,function(V){z("failed to asynchronously prepare wasm: "+V),pe(V)})}var D={a:pn};if(M||($e++,t.monitorRunDependencies&&t.monitorRunDependencies($e)),t.instantiateWasm)try{return t.instantiateWasm(D,x)}catch(j){return z("Module.instantiateWasm callback failed with error: "+j),!1}(H||typeof WebAssembly.instantiateStreaming!="function"||pt()||Se.startsWith("file://")||T||typeof fetch!="function"?k(P):fetch(Se,{credentials:"same-origin"}).then(function(j){return WebAssembly.instantiateStreaming(j,D).then(P,function(V){return z("wasm streaming compile failed: "+V),z("falling back to ArrayBuffer instantiation"),k(P)})})).catch(r)})(),t.___wasm_call_ctors=function(){return(t.___wasm_call_ctors=t.asm.Va).apply(null,arguments)},t._OrtInit=function(){return(t._OrtInit=t.asm.Wa).apply(null,arguments)},t._OrtCreateSessionOptions=function(){return(t._OrtCreateSessionOptions=t.asm.Xa).apply(null,arguments)},t._OrtAppendExecutionProvider=function(){return(t._OrtAppendExecutionProvider=t.asm.Ya).apply(null,arguments)},t._OrtAddSessionConfigEntry=function(){return(t._OrtAddSessionConfigEntry=t.asm.Za).apply(null,arguments)},t._OrtReleaseSessionOptions=function(){return(t._OrtReleaseSessionOptions=t.asm._a).apply(null,arguments)},t._OrtCreateSession=function(){return(t._OrtCreateSession=t.asm.$a).apply(null,arguments)},t._OrtReleaseSession=function(){return(t._OrtReleaseSession=t.asm.ab).apply(null,arguments)},t._OrtGetInputCount=function(){return(t._OrtGetInputCount=t.asm.bb).apply(null,arguments)},t._OrtGetOutputCount=function(){return(t._OrtGetOutputCount=t.asm.cb).apply(null,arguments)},t._OrtGetInputName=function(){return(t._OrtGetInputName=t.asm.db).apply(null,arguments)},t._OrtGetOutputName=function(){return(t._OrtGetOutputName=t.asm.eb).apply(null,arguments)},t._OrtFree=function(){return(t._OrtFree=t.asm.fb).apply(null,arguments)},t._OrtCreateTensor=function(){return(t._OrtCreateTensor=t.asm.gb).apply(null,arguments)},t._OrtGetTensorData=function(){return(t._OrtGetTensorData=t.asm.hb).apply(null,arguments)},t._OrtReleaseTensor=function(){return(t._OrtReleaseTensor=t.asm.ib).apply(null,arguments)},t._OrtCreateRunOptions=function(){return(t._OrtCreateRunOptions=t.asm.jb).apply(null,arguments)},t._OrtAddRunConfigEntry=function(){return(t._OrtAddRunConfigEntry=t.asm.kb).apply(null,arguments)},t._OrtReleaseRunOptions=function(){return(t._OrtReleaseRunOptions=t.asm.lb).apply(null,arguments)},t._OrtRun=function(){return(t._OrtRun=t.asm.mb).apply(null,arguments)},t._OrtEndProfiling=function(){return(t._OrtEndProfiling=t.asm.nb).apply(null,arguments)};var Dt=t._pthread_self=function(){return(Dt=t._pthread_self=t.asm.ob).apply(null,arguments)},$t=t._malloc=function(){return($t=t._malloc=t.asm.pb).apply(null,arguments)},Gt=t._free=function(){return(Gt=t._free=t.asm.qb).apply(null,arguments)},qt=t._fflush=function(){return(qt=t._fflush=t.asm.rb).apply(null,arguments)};t.__emscripten_tls_init=function(){return(t.__emscripten_tls_init=t.asm.sb).apply(null,arguments)};var Wt=t.___funcs_on_exit=function(){return(Wt=t.___funcs_on_exit=t.asm.tb).apply(null,arguments)},Ht=t.__emscripten_thread_init=function(){return(Ht=t.__emscripten_thread_init=t.asm.vb).apply(null,arguments)};t.__emscripten_thread_crashed=function(){return(t.__emscripten_thread_crashed=t.asm.wb).apply(null,arguments)};var Ct,Xt=t._emscripten_run_in_main_runtime_thread_js=function(){return(Xt=t._emscripten_run_in_main_runtime_thread_js=t.asm.xb).apply(null,arguments)},Yt=t.__emscripten_proxy_execute_task_queue=function(){return(Yt=t.__emscripten_proxy_execute_task_queue=t.asm.yb).apply(null,arguments)},Rt=t.__emscripten_thread_free_data=function(){return(Rt=t.__emscripten_thread_free_data=t.asm.zb).apply(null,arguments)},Kt=t.__emscripten_thread_exit=function(){return(Kt=t.__emscripten_thread_exit=t.asm.Ab).apply(null,arguments)},de=t._setThrew=function(){return(de=t._setThrew=t.asm.Bb).apply(null,arguments)},Qt=t._emscripten_stack_set_limits=function(){return(Qt=t._emscripten_stack_set_limits=t.asm.Cb).apply(null,arguments)},le=t.stackSave=function(){return(le=t.stackSave=t.asm.Db).apply(null,arguments)},ue=t.stackRestore=function(){return(ue=t.stackRestore=t.asm.Eb).apply(null,arguments)},jt=t.stackAlloc=function(){return(jt=t.stackAlloc=t.asm.Fb).apply(null,arguments)},Nt=t.___cxa_can_catch=function(){return(Nt=t.___cxa_can_catch=t.asm.Gb).apply(null,arguments)},Jt=t.___cxa_is_pointer_type=function(){return(Jt=t.___cxa_is_pointer_type=t.asm.Hb).apply(null,arguments)},Zt=t.dynCall_j=function(){return(Zt=t.dynCall_j=t.asm.Ib).apply(null,arguments)},en=t.dynCall_iiiiij=function(){return(en=t.dynCall_iiiiij=t.asm.Jb).apply(null,arguments)},tn=t.dynCall_jii=function(){return(tn=t.dynCall_jii=t.asm.Kb).apply(null,arguments)},nn=t.dynCall_viiiiij=function(){return(nn=t.dynCall_viiiiij=t.asm.Lb).apply(null,arguments)},rn=t.dynCall_vjji=function(){return(rn=t.dynCall_vjji=t.asm.Mb).apply(null,arguments)},on=t.dynCall_viiijjjii=function(){return(on=t.dynCall_viiijjjii=t.asm.Nb).apply(null,arguments)},sn=t.dynCall_iij=function(){return(sn=t.dynCall_iij=t.asm.Ob).apply(null,arguments)},an=t.dynCall_ji=function(){return(an=t.dynCall_ji=t.asm.Pb).apply(null,arguments)},un=t.dynCall_iiiiiij=function(){return(un=t.dynCall_iiiiiij=t.asm.Qb).apply(null,arguments)},ln=t.dynCall_iiij=function(){return(ln=t.dynCall_iiij=t.asm.Rb).apply(null,arguments)};function cn(){function x(){if(!Ct&&(Ct=!0,t.calledRun=!0,!ye)&&(M||rt(Ve),e(t),t.onRuntimeInitialized&&t.onRuntimeInitialized(),!M)){if(t.postRun)for(typeof t.postRun=="function"&&(t.postRun=[t.postRun]);t.postRun.length;){var P=t.postRun.shift();Qe.unshift(P)}rt(Qe)}}if(!(0<$e))if(M)e(t),M||rt(Ve),postMessage({cmd:"loaded"});else{if(t.preRun)for(typeof t.preRun=="function"&&(t.preRun=[t.preRun]);t.preRun.length;)ze();rt(Ue),0<$e||(t.setStatus?(t.setStatus("Running..."),setTimeout(function(){setTimeout(function(){t.setStatus("")},1),x()},1)):x())}}if(t.UTF8ToString=Ae,t.stringToUTF8=function(x,P,k){return Ne(x,h(),P,k)},t.lengthBytesUTF8=De,t.keepRuntimeAlive=Ge,t.wasmMemory=X,t.stackSave=le,t.stackRestore=ue,t.stackAlloc=jt,t.ExitStatus=Je,t.PThread=re,Ye=function x(){Ct||cn(),Ct||(Ye=x)},t.preInit)for(typeof t.preInit=="function"&&(t.preInit=[t.preInit]);0{var u,c=(u=(u=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){var s,h,f;p=p||{},s||(s=p!==void 0?p:{}),s.ready=new Promise(function(E,I){h=E,f=I});var l,o,t,e,r,i,d=Object.assign({},s),g="./this.program",m=(E,I)=>{throw I},_=typeof window=="object",y=typeof importScripts=="function",w=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",v="";w?(v=y?a(908).dirname(v)+"/":"//",i=()=>{r||(e=a(1384),r=a(908))},l=function(E,I){return i(),E=r.normalize(E),e.readFileSync(E,I?void 0:"utf8")},t=E=>((E=l(E,!0)).buffer||(E=new Uint8Array(E)),E),o=(E,I,F)=>{i(),E=r.normalize(E),e.readFile(E,function(R,U){R?F(R):I(U.buffer)})},1{if(T||0{var I=new XMLHttpRequest;return I.open("GET",E,!1),I.send(null),I.responseText},y&&(t=E=>{var I=new XMLHttpRequest;return I.open("GET",E,!1),I.responseType="arraybuffer",I.send(null),new Uint8Array(I.response)}),o=(E,I,F)=>{var R=new XMLHttpRequest;R.open("GET",E,!0),R.responseType="arraybuffer",R.onload=()=>{R.status==200||R.status==0&&R.response?I(R.response):F()},R.onerror=F,R.send(null)});var S,O=s.print||console.log.bind(console),A=s.printErr||console.warn.bind(console);Object.assign(s,d),d=null,s.thisProgram&&(g=s.thisProgram),s.quit&&(m=s.quit),s.wasmBinary&&(S=s.wasmBinary);var T=s.noExitRuntime||!1;typeof WebAssembly!="object"&&Pe("no native wasm support detected");var M,N,B,$,L,H,C=!1,z=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function J(E,I,F){var R=(I>>>=0)+F;for(F=I;E[F]&&!(F>=R);)++F;if(16(U=(240&U)==224?(15&U)<<12|W<<6|Y:(7&U)<<18|W<<12|Y<<6|63&E[I++])?R+=String.fromCharCode(U):(U-=65536,R+=String.fromCharCode(55296|U>>10,56320|1023&U))}}else R+=String.fromCharCode(U)}return R}function X(E,I){return(E>>>=0)?J($,E,I):""}function te(E,I,F,R){if(!(0>>=0;R=F+R-1;for(var W=0;W=Y&&(Y=65536+((1023&Y)<<10)|1023&E.charCodeAt(++W)),127>=Y){if(F>=R)break;I[F++>>>0]=Y}else{if(2047>=Y){if(F+1>=R)break;I[F++>>>0]=192|Y>>6}else{if(65535>=Y){if(F+2>=R)break;I[F++>>>0]=224|Y>>12}else{if(F+3>=R)break;I[F++>>>0]=240|Y>>18,I[F++>>>0]=128|Y>>12&63}I[F++>>>0]=128|Y>>6&63}I[F++>>>0]=128|63&Y}}return I[F>>>0]=0,F-U}function ne(E){for(var I=0,F=0;F=R?I++:2047>=R?I+=2:55296<=R&&57343>=R?(I+=4,++F):I+=3}return I}function me(){var E=M.buffer;N=E,s.HEAP8=B=new Int8Array(E),s.HEAP16=new Int16Array(E),s.HEAP32=L=new Int32Array(E),s.HEAPU8=$=new Uint8Array(E),s.HEAPU16=new Uint16Array(E),s.HEAPU32=H=new Uint32Array(E),s.HEAPF32=new Float32Array(E),s.HEAPF64=new Float64Array(E)}var Me,Oe=[],ce=[],Te=[],ye=[],Fe=0;function He(){var E=s.preRun.shift();Oe.unshift(E)}var Ae,Ne=0,De=null;function Pe(E){throw s.onAbort&&s.onAbort(E),A(E="Aborted("+E+")"),C=!0,E=new WebAssembly.RuntimeError(E+". Build with -sASSERTIONS for more info."),f(E),E}function ve(){return Ae.startsWith("data:application/octet-stream;base64,")}if(Ae="ort-wasm.wasm",!ve()){var Be=Ae;Ae=s.locateFile?s.locateFile(Be,v):v+Be}function Ue(){var E=Ae;try{if(E==Ae&&S)return new Uint8Array(S);if(t)return t(E);throw"both async and sync fetching of the wasm failed"}catch(I){Pe(I)}}function Ve(E){this.name="ExitStatus",this.message="Program terminated with exit("+E+")",this.status=E}function Xe(E){for(;0>2>>>0]=I},this.Eb=function(){return H[this.zb+4>>2>>>0]},this.Sb=function(I){H[this.zb+8>>2>>>0]=I},this.Wb=function(){return H[this.zb+8>>2>>>0]},this.Tb=function(){L[this.zb>>2>>>0]=0},this.Ib=function(I){B[this.zb+12>>0>>>0]=I?1:0},this.Pb=function(){return B[this.zb+12>>0>>>0]!=0},this.Jb=function(I){B[this.zb+13>>0>>>0]=I?1:0},this.Lb=function(){return B[this.zb+13>>0>>>0]!=0},this.Rb=function(I,F){this.Fb(0),this.Ub(I),this.Sb(F),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){L[this.zb>>2>>>0]+=1},this.Xb=function(){var I=L[this.zb>>2>>>0];return L[this.zb>>2>>>0]=I-1,I===1},this.Fb=function(I){H[this.zb+16>>2>>>0]=I},this.Ob=function(){return H[this.zb+16>>2>>>0]},this.Qb=function(){if(mt(this.Eb()))return H[this.Db>>2>>>0];var I=this.Ob();return I!==0?I:this.Db}}function $e(E){return ot(new Se(E).zb)}var Ye=[];function pe(E){var I=Ye[E];return I||(E>=Ye.length&&(Ye.length=E+1),Ye[E]=I=Me.get(E)),I}function pt(E){var I=ne(E)+1,F=_e(I);return F&&te(E,B,F,I),F}var lt={};function Et(){if(!Je){var E,I={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:g||"./this.program"};for(E in lt)lt[E]===void 0?delete I[E]:I[E]=lt[E];var F=[];for(E in I)F.push(E+"="+I[E]);Je=F}return Je}var Je,ct=[null,[],[]];function dt(E,I){var F=ct[E];I===0||I===10?((E===1?O:A)(J(F,0)),F.length=0):F.push(I)}var Le=0;function it(E){return E%4==0&&(E%100!=0||E%400==0)}var re=[31,29,31,30,31,30,31,31,30,31,30,31],rt=[31,28,31,30,31,30,31,31,30,31,30,31];function Mt(E,I,F,R){function U(G,ge,xe){for(G=typeof G=="number"?G.toString():G||"";G.lengthet?-1:0qe-G.getDate())){G.setDate(G.getDate()+ge);break}ge-=qe-G.getDate()+1,G.setDate(1),11>xe?G.setMonth(xe+1):(G.setMonth(0),G.setFullYear(G.getFullYear()+1))}return xe=new Date(G.getFullYear()+1,0,4),ge=Z(new Date(G.getFullYear(),0,4)),xe=Z(xe),0>=Y(ge,G)?0>=Y(xe,G)?G.getFullYear()+1:G.getFullYear():G.getFullYear()-1}var ae=L[R+40>>2>>>0];for(var we in R={$b:L[R>>2>>>0],Zb:L[R+4>>2>>>0],Gb:L[R+8>>2>>>0],Kb:L[R+12>>2>>>0],Hb:L[R+16>>2>>>0],Cb:L[R+20>>2>>>0],Ab:L[R+24>>2>>>0],Bb:L[R+28>>2>>>0],bc:L[R+32>>2>>>0],Yb:L[R+36>>2>>>0],ac:ae?X(ae):""},F=X(F),ae={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})F=F.replace(new RegExp(we,"g"),ae[we]);var Ce="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),Ie="January February March April May June July August September October November December".split(" ");for(we in ae={"%a":function(G){return Ce[G.Ab].substring(0,3)},"%A":function(G){return Ce[G.Ab]},"%b":function(G){return Ie[G.Hb].substring(0,3)},"%B":function(G){return Ie[G.Hb]},"%C":function(G){return W((G.Cb+1900)/100|0,2)},"%d":function(G){return W(G.Kb,2)},"%e":function(G){return U(G.Kb,2," ")},"%g":function(G){return Q(G).toString().substring(2)},"%G":function(G){return Q(G)},"%H":function(G){return W(G.Gb,2)},"%I":function(G){return(G=G.Gb)==0?G=12:12G.Gb?"AM":"PM"},"%S":function(G){return W(G.$b,2)},"%t":function(){return" "},"%u":function(G){return G.Ab||7},"%U":function(G){return W(Math.floor((G.Bb+7-G.Ab)/7),2)},"%V":function(G){var ge=Math.floor((G.Bb+7-(G.Ab+6)%7)/7);if(2>=(G.Ab+371-G.Bb-2)%7&&ge++,ge)ge==53&&((xe=(G.Ab+371-G.Bb)%7)==4||xe==3&&it(G.Cb)||(ge=1));else{ge=52;var xe=(G.Ab+7-G.Bb-1)%7;(xe==4||xe==5&&it(G.Cb%400-1))&&ge++}return W(ge,2)},"%w":function(G){return G.Ab},"%W":function(G){return W(Math.floor((G.Bb+7-(G.Ab+6)%7)/7),2)},"%y":function(G){return(G.Cb+1900).toString().substring(2)},"%Y":function(G){return G.Cb+1900},"%z":function(G){var ge=0<=(G=G.Yb);return G=Math.abs(G)/60,(ge?"+":"-")+("0000"+(G/60*100+G%60)).slice(-4)},"%Z":function(G){return G.ac},"%%":function(){return"%"}},F=F.replace(/%%/g,"\0\0"),ae)F.includes(we)&&(F=F.replace(new RegExp(we,"g"),ae[we](R)));return we=function(G){var ge=Array(ne(G)+1);return te(G,ge,0,ge.length),ge}(F=F.replace(/\0\0/g,"%")),we.length>I?0:(B.set(we,E>>>0),we.length-1)}var kt={a:function(E){return _e(E+24)+24},m:function(E){return(E=new Se(E)).Pb()||(E.Ib(!0),Ge--),E.Jb(!1),Qe.push(E),E.Nb(),E.Qb()},ia:function(E){throw A("Unexpected exception thrown, this is not properly supported - aborting"),C=!0,E},w:function(){se(0);var E=Qe.pop();if(E.Xb()&&!E.Lb()){var I=E.Wb();I&&pe(I)(E.Db),$e(E.Db)}ze=0},d:function(){var E=ze;if(!E)return Le=0;var I=new Se(E);I.Fb(E);var F=I.Eb();if(!F)return Le=0,E;for(var R=Array.prototype.slice.call(arguments),U=0;U>>2]+4294967296*L[E+4>>>2])),L[I>>2>>>0]=E.getUTCSeconds(),L[I+4>>2>>>0]=E.getUTCMinutes(),L[I+8>>2>>>0]=E.getUTCHours(),L[I+12>>2>>>0]=E.getUTCDate(),L[I+16>>2>>>0]=E.getUTCMonth(),L[I+20>>2>>>0]=E.getUTCFullYear()-1900,L[I+24>>2>>>0]=E.getUTCDay(),L[I+28>>2>>>0]=(E.getTime()-Date.UTC(E.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(E,I){E=new Date(1e3*(H[E>>>2]+4294967296*L[E+4>>>2])),L[I>>2>>>0]=E.getSeconds(),L[I+4>>2>>>0]=E.getMinutes(),L[I+8>>2>>>0]=E.getHours(),L[I+12>>2>>>0]=E.getDate(),L[I+16>>2>>>0]=E.getMonth(),L[I+20>>2>>>0]=E.getFullYear()-1900,L[I+24>>2>>>0]=E.getDay();var F=new Date(E.getFullYear(),0,1);L[I+28>>2>>>0]=(E.getTime()-F.getTime())/864e5|0,L[I+36>>2>>>0]=-60*E.getTimezoneOffset();var R=new Date(E.getFullYear(),6,1).getTimezoneOffset();F=F.getTimezoneOffset(),L[I+32>>2>>>0]=0|(R!=F&&E.getTimezoneOffset()==Math.min(F,R))},Fa:function(E){var I=new Date(L[E+20>>2>>>0]+1900,L[E+16>>2>>>0],L[E+12>>2>>>0],L[E+8>>2>>>0],L[E+4>>2>>>0],L[E>>2>>>0],0),F=L[E+32>>2>>>0],R=I.getTimezoneOffset(),U=new Date(I.getFullYear(),0,1),W=new Date(I.getFullYear(),6,1).getTimezoneOffset(),Y=U.getTimezoneOffset(),Z=Math.min(Y,W);return 0>F?L[E+32>>2>>>0]=+(W!=Y&&Z==R):0>2>>>0]=I.getDay(),L[E+28>>2>>>0]=(I.getTime()-U.getTime())/864e5|0,L[E>>2>>>0]=I.getSeconds(),L[E+4>>2>>>0]=I.getMinutes(),L[E+8>>2>>>0]=I.getHours(),L[E+12>>2>>>0]=I.getDate(),L[E+16>>2>>>0]=I.getMonth(),I.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function E(I,F,R){E.Vb||(E.Vb=!0,function(U,W,Y){function Z(Ie){return(Ie=Ie.toTimeString().match(/\(([A-Za-z ]+)\)$/))?Ie[1]:"GMT"}var Q=new Date().getFullYear(),ae=new Date(Q,0,1),we=new Date(Q,6,1);Q=ae.getTimezoneOffset();var Ce=we.getTimezoneOffset();L[U>>2>>>0]=60*Math.max(Q,Ce),L[W>>2>>>0]=+(Q!=Ce),U=Z(ae),W=Z(we),U=pt(U),W=pt(W),Ce>2>>>0]=U,H[Y+4>>2>>>0]=W):(H[Y>>2>>>0]=W,H[Y+4>>2>>>0]=U)}(I,F,R))},B:function(){Pe("")},ma:function(){return 4294901760},I:w?()=>{var E=process.hrtime();return 1e3*E[0]+E[1]/1e6}:()=>performance.now(),xa:function(E,I,F){$.copyWithin(E>>>0,I>>>0,I+F>>>0)},G:function(E){var I=$.length;if(4294901760<(E>>>=0))return!1;for(var F=1;4>=F;F*=2){var R=I*(1+.2/F);R=Math.min(R,E+100663296);var U=Math;R=Math.max(E,R),U=U.min.call(U,4294901760,R+(65536-R%65536)%65536);e:{try{M.grow(U-N.byteLength+65535>>>16),me();var W=1;break e}catch{}W=void 0}if(W)return!0}return!1},va:function(E,I){var F=0;return Et().forEach(function(R,U){var W=I+F;for(U=H[E+4*U>>2>>>0]=W,W=0;W>0>>>0]=R.charCodeAt(W);B[U>>0>>>0]=0,F+=R.length+1}),0},wa:function(E,I){var F=Et();H[E>>2>>>0]=F.length;var R=0;return F.forEach(function(U){R+=U.length+1}),H[I>>2>>>0]=R,0},ba:function(E){T||0>2>>>0],Z=H[I+4>>2>>>0];I+=8;for(var Q=0;Q>>0]);U+=Z}return H[R>>2>>>0]=U,0},c:function(){return Le},ja:function E(I,F){E.Mb||(E.Mb=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var U=new Uint8Array(1);return()=>(crypto.getRandomValues(U),U[0])}if(w)try{var W=a(Object(function(){var Y=new Error("Cannot find module 'crypto'");throw Y.code="MODULE_NOT_FOUND",Y}()));return()=>W.randomBytes(1)[0]}catch{}return()=>Pe("randomDevice")}());for(var R=0;R>0>>>0]=E.Mb();return 0},ea:function(E,I,F){var R=ie();try{return pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},fa:function(E,I,F){var R=ie();try{return pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},J:function(E){var I=ie();try{return pe(E)()}catch(F){if(oe(I),F!==F+0)throw F;se(1,0)}},e:function(E,I){var F=ie();try{return pe(E)(I)}catch(R){if(oe(F),R!==R+0)throw R;se(1,0)}},N:function(E,I,F){var R=ie();try{return pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},O:function(E,I,F){var R=ie();try{return pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},j:function(E,I,F){var R=ie();try{return pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},o:function(E,I,F,R){var U=ie();try{return pe(E)(I,F,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},p:function(E,I,F,R,U){var W=ie();try{return pe(E)(I,F,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},M:function(E,I,F,R,U,W){var Y=ie();try{return pe(E)(I,F,R,U,W)}catch(Z){if(oe(Y),Z!==Z+0)throw Z;se(1,0)}},r:function(E,I,F,R,U,W){var Y=ie();try{return pe(E)(I,F,R,U,W)}catch(Z){if(oe(Y),Z!==Z+0)throw Z;se(1,0)}},v:function(E,I,F,R,U,W,Y){var Z=ie();try{return pe(E)(I,F,R,U,W,Y)}catch(Q){if(oe(Z),Q!==Q+0)throw Q;se(1,0)}},K:function(E,I,F,R,U,W,Y,Z){var Q=ie();try{return pe(E)(I,F,R,U,W,Y,Z)}catch(ae){if(oe(Q),ae!==ae+0)throw ae;se(1,0)}},D:function(E,I,F,R,U,W,Y,Z,Q,ae,we,Ce){var Ie=ie();try{return pe(E)(I,F,R,U,W,Y,Z,Q,ae,we,Ce)}catch(G){if(oe(Ie),G!==G+0)throw G;se(1,0)}},X:function(E,I,F,R,U,W,Y,Z){var Q=ie();try{return Ot(E,I,F,R,U,W,Y,Z)}catch(ae){if(oe(Q),ae!==ae+0)throw ae;se(1,0)}},V:function(E,I,F,R,U,W,Y){var Z=ie();try{return _t(E,I,F,R,U,W,Y)}catch(Q){if(oe(Z),Q!==Q+0)throw Q;se(1,0)}},U:function(E,I,F,R,U){var W=ie();try{return At(E,I,F,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},Z:function(E,I,F,R){var U=ie();try{return xt(E,I,F,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},W:function(E){var I=ie();try{return bt(E)}catch(F){if(oe(I),F!==F+0)throw F;se(1,0)}},Y:function(E,I){var F=ie();try{return St(E,I)}catch(R){if(oe(F),R!==R+0)throw R;se(1,0)}},T:function(E,I,F){var R=ie();try{return yt(E,I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},f:function(E){var I=ie();try{pe(E)()}catch(F){if(oe(I),F!==F+0)throw F;se(1,0)}},q:function(E,I){var F=ie();try{pe(E)(I)}catch(R){if(oe(F),R!==R+0)throw R;se(1,0)}},h:function(E,I,F){var R=ie();try{pe(E)(I,F)}catch(U){if(oe(R),U!==U+0)throw U;se(1,0)}},da:function(E,I,F,R){var U=ie();try{pe(E)(I,F,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},l:function(E,I,F,R){var U=ie();try{pe(E)(I,F,R)}catch(W){if(oe(U),W!==W+0)throw W;se(1,0)}},t:function(E,I,F,R,U){var W=ie();try{pe(E)(I,F,R,U)}catch(Y){if(oe(W),Y!==Y+0)throw Y;se(1,0)}},u:function(E,I,F,R,U,W){var Y=ie();try{pe(E)(I,F,R,U,W)}catch(Z){if(oe(Y),Z!==Z+0)throw Z;se(1,0)}},x:function(E,I,F,R,U,W,Y){var Z=ie();try{pe(E)(I,F,R,U,W,Y)}catch(Q){if(oe(Z),Q!==Q+0)throw Q;se(1,0)}},z:function(E,I,F,R,U,W,Y,Z){var Q=ie();try{pe(E)(I,F,R,U,W,Y,Z)}catch(ae){if(oe(Q),ae!==ae+0)throw ae;se(1,0)}},ga:function(E,I,F,R,U,W,Y,Z,Q){var ae=ie();try{pe(E)(I,F,R,U,W,Y,Z,Q)}catch(we){if(oe(ae),we!==we+0)throw we;se(1,0)}},A:function(E,I,F,R,U,W,Y,Z,Q,ae,we){var Ce=ie();try{pe(E)(I,F,R,U,W,Y,Z,Q,ae,we)}catch(Ie){if(oe(Ce),Ie!==Ie+0)throw Ie;se(1,0)}},C:function(E,I,F,R,U,W,Y,Z,Q,ae,we,Ce,Ie,G,ge,xe){var qe=ie();try{pe(E)(I,F,R,U,W,Y,Z,Q,ae,we,Ce,Ie,G,ge,xe)}catch(et){if(oe(qe),et!==et+0)throw et;se(1,0)}},aa:function(E,I,F,R,U,W,Y,Z){var Q=ie();try{wt(E,I,F,R,U,W,Y,Z)}catch(ae){if(oe(Q),ae!==ae+0)throw ae;se(1,0)}},_:function(E,I,F,R,U,W,Y,Z,Q,ae,we,Ce){var Ie=ie();try{Tt(E,I,F,R,U,W,Y,Z,Q,ae,we,Ce)}catch(G){if(oe(Ie),G!==G+0)throw G;se(1,0)}},$:function(E,I,F,R,U,W){var Y=ie();try{vt(E,I,F,R,U,W)}catch(Z){if(oe(Y),Z!==Z+0)throw Z;se(1,0)}},n:function(E){return E},F:function(E){Le=E},ha:Mt,y:function(E,I,F,R){return Mt(E,I,F,R)}};(function(){function E(U){s.asm=U.exports,M=s.asm.Ka,me(),Me=s.asm.ib,ce.unshift(s.asm.La),Ne--,s.monitorRunDependencies&&s.monitorRunDependencies(Ne),Ne==0&&De&&(U=De,De=null,U())}function I(U){E(U.instance)}function F(U){return function(){if(!S&&(_||y)){if(typeof fetch=="function"&&!Ae.startsWith("file://"))return fetch(Ae,{credentials:"same-origin"}).then(function(W){if(!W.ok)throw"failed to load wasm binary file at '"+Ae+"'";return W.arrayBuffer()}).catch(function(){return Ue()});if(o)return new Promise(function(W,Y){o(Ae,function(Z){W(new Uint8Array(Z))},Y)})}return Promise.resolve().then(function(){return Ue()})}().then(function(W){return WebAssembly.instantiate(W,R)}).then(function(W){return W}).then(U,function(W){A("failed to asynchronously prepare wasm: "+W),Pe(W)})}var R={a:kt};if(Ne++,s.monitorRunDependencies&&s.monitorRunDependencies(Ne),s.instantiateWasm)try{return s.instantiateWasm(R,E)}catch(U){return A("Module.instantiateWasm callback failed with error: "+U),!1}(S||typeof WebAssembly.instantiateStreaming!="function"||ve()||Ae.startsWith("file://")||w||typeof fetch!="function"?F(I):fetch(Ae,{credentials:"same-origin"}).then(function(U){return WebAssembly.instantiateStreaming(U,R).then(I,function(W){return A("wasm streaming compile failed: "+W),A("falling back to ArrayBuffer instantiation"),F(I)})})).catch(f)})(),s.___wasm_call_ctors=function(){return(s.___wasm_call_ctors=s.asm.La).apply(null,arguments)},s._OrtInit=function(){return(s._OrtInit=s.asm.Ma).apply(null,arguments)},s._OrtCreateSessionOptions=function(){return(s._OrtCreateSessionOptions=s.asm.Na).apply(null,arguments)},s._OrtAppendExecutionProvider=function(){return(s._OrtAppendExecutionProvider=s.asm.Oa).apply(null,arguments)},s._OrtAddSessionConfigEntry=function(){return(s._OrtAddSessionConfigEntry=s.asm.Pa).apply(null,arguments)},s._OrtReleaseSessionOptions=function(){return(s._OrtReleaseSessionOptions=s.asm.Qa).apply(null,arguments)},s._OrtCreateSession=function(){return(s._OrtCreateSession=s.asm.Ra).apply(null,arguments)},s._OrtReleaseSession=function(){return(s._OrtReleaseSession=s.asm.Sa).apply(null,arguments)},s._OrtGetInputCount=function(){return(s._OrtGetInputCount=s.asm.Ta).apply(null,arguments)},s._OrtGetOutputCount=function(){return(s._OrtGetOutputCount=s.asm.Ua).apply(null,arguments)},s._OrtGetInputName=function(){return(s._OrtGetInputName=s.asm.Va).apply(null,arguments)},s._OrtGetOutputName=function(){return(s._OrtGetOutputName=s.asm.Wa).apply(null,arguments)},s._OrtFree=function(){return(s._OrtFree=s.asm.Xa).apply(null,arguments)},s._OrtCreateTensor=function(){return(s._OrtCreateTensor=s.asm.Ya).apply(null,arguments)},s._OrtGetTensorData=function(){return(s._OrtGetTensorData=s.asm.Za).apply(null,arguments)},s._OrtReleaseTensor=function(){return(s._OrtReleaseTensor=s.asm._a).apply(null,arguments)},s._OrtCreateRunOptions=function(){return(s._OrtCreateRunOptions=s.asm.$a).apply(null,arguments)},s._OrtAddRunConfigEntry=function(){return(s._OrtAddRunConfigEntry=s.asm.ab).apply(null,arguments)},s._OrtReleaseRunOptions=function(){return(s._OrtReleaseRunOptions=s.asm.bb).apply(null,arguments)},s._OrtRun=function(){return(s._OrtRun=s.asm.cb).apply(null,arguments)},s._OrtEndProfiling=function(){return(s._OrtEndProfiling=s.asm.db).apply(null,arguments)};var Ze,_e=s._malloc=function(){return(_e=s._malloc=s.asm.eb).apply(null,arguments)},ot=s._free=function(){return(ot=s._free=s.asm.fb).apply(null,arguments)},ft=s._fflush=function(){return(ft=s._fflush=s.asm.gb).apply(null,arguments)},st=s.___funcs_on_exit=function(){return(st=s.___funcs_on_exit=s.asm.hb).apply(null,arguments)},se=s._setThrew=function(){return(se=s._setThrew=s.asm.jb).apply(null,arguments)},ie=s.stackSave=function(){return(ie=s.stackSave=s.asm.kb).apply(null,arguments)},oe=s.stackRestore=function(){return(oe=s.stackRestore=s.asm.lb).apply(null,arguments)},gt=s.stackAlloc=function(){return(gt=s.stackAlloc=s.asm.mb).apply(null,arguments)},at=s.___cxa_can_catch=function(){return(at=s.___cxa_can_catch=s.asm.nb).apply(null,arguments)},mt=s.___cxa_is_pointer_type=function(){return(mt=s.___cxa_is_pointer_type=s.asm.ob).apply(null,arguments)},bt=s.dynCall_j=function(){return(bt=s.dynCall_j=s.asm.pb).apply(null,arguments)},_t=s.dynCall_iiiiij=function(){return(_t=s.dynCall_iiiiij=s.asm.qb).apply(null,arguments)},yt=s.dynCall_jii=function(){return(yt=s.dynCall_jii=s.asm.rb).apply(null,arguments)},wt=s.dynCall_viiiiij=function(){return(wt=s.dynCall_viiiiij=s.asm.sb).apply(null,arguments)},vt=s.dynCall_vjji=function(){return(vt=s.dynCall_vjji=s.asm.tb).apply(null,arguments)},Tt=s.dynCall_viiijjjii=function(){return(Tt=s.dynCall_viiijjjii=s.asm.ub).apply(null,arguments)},xt=s.dynCall_iij=function(){return(xt=s.dynCall_iij=s.asm.vb).apply(null,arguments)},St=s.dynCall_ji=function(){return(St=s.dynCall_ji=s.asm.wb).apply(null,arguments)},Ot=s.dynCall_iiiiiij=function(){return(Ot=s.dynCall_iiiiiij=s.asm.xb).apply(null,arguments)},At=s.dynCall_iiij=function(){return(At=s.dynCall_iiij=s.asm.yb).apply(null,arguments)};function Pt(){function E(){if(!Ze&&(Ze=!0,s.calledRun=!0,!C)){if(Xe(ce),h(s),s.onRuntimeInitialized&&s.onRuntimeInitialized(),s.postRun)for(typeof s.postRun=="function"&&(s.postRun=[s.postRun]);s.postRun.length;){var I=s.postRun.shift();ye.unshift(I)}Xe(ye)}}if(!(0{b.exports=function(n,a){for(var u=new Array(arguments.length-1),c=0,p=2,s=!0;p{var a=n;a.length=function(h){var f=h.length;if(!f)return 0;for(var l=0;--f%4>1&&h.charAt(f)==="=";)++l;return Math.ceil(3*h.length)/4-l};for(var u=new Array(64),c=new Array(123),p=0;p<64;)c[u[p]=p<26?p+65:p<52?p+71:p<62?p-4:p-59|43]=p++;a.encode=function(h,f,l){for(var o,t=null,e=[],r=0,i=0;f>2],o=(3&d)<<4,i=1;break;case 1:e[r++]=u[o|d>>4],o=(15&d)<<2,i=2;break;case 2:e[r++]=u[o|d>>6],e[r++]=u[63&d],i=0}r>8191&&((t||(t=[])).push(String.fromCharCode.apply(String,e)),r=0)}return i&&(e[r++]=u[o],e[r++]=61,i===1&&(e[r++]=61)),t?(r&&t.push(String.fromCharCode.apply(String,e.slice(0,r))),t.join("")):String.fromCharCode.apply(String,e.slice(0,r))};var s="invalid encoding";a.decode=function(h,f,l){for(var o,t=l,e=0,r=0;r1)break;if((i=c[i])===void 0)throw Error(s);switch(e){case 0:o=i,e=1;break;case 1:f[l++]=o<<2|(48&i)>>4,o=i,e=2;break;case 2:f[l++]=(15&o)<<4|(60&i)>>2,o=i,e=3;break;case 3:f[l++]=(3&o)<<6|i,e=0}}if(e===1)throw Error(s);return l-t},a.test=function(h){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(h)}},9211:b=>{function n(){this._listeners={}}b.exports=n,n.prototype.on=function(a,u,c){return(this._listeners[a]||(this._listeners[a]=[])).push({fn:u,ctx:c||this}),this},n.prototype.off=function(a,u){if(a===void 0)this._listeners={};else if(u===void 0)this._listeners[a]=[];else for(var c=this._listeners[a],p=0;p{function n(s){return typeof Float32Array<"u"?function(){var h=new Float32Array([-0]),f=new Uint8Array(h.buffer),l=f[3]===128;function o(i,d,g){h[0]=i,d[g]=f[0],d[g+1]=f[1],d[g+2]=f[2],d[g+3]=f[3]}function t(i,d,g){h[0]=i,d[g]=f[3],d[g+1]=f[2],d[g+2]=f[1],d[g+3]=f[0]}function e(i,d){return f[0]=i[d],f[1]=i[d+1],f[2]=i[d+2],f[3]=i[d+3],h[0]}function r(i,d){return f[3]=i[d],f[2]=i[d+1],f[1]=i[d+2],f[0]=i[d+3],h[0]}s.writeFloatLE=l?o:t,s.writeFloatBE=l?t:o,s.readFloatLE=l?e:r,s.readFloatBE=l?r:e}():function(){function h(l,o,t,e){var r=o<0?1:0;if(r&&(o=-o),o===0)l(1/o>0?0:2147483648,t,e);else if(isNaN(o))l(2143289344,t,e);else if(o>34028234663852886e22)l((r<<31|2139095040)>>>0,t,e);else if(o<11754943508222875e-54)l((r<<31|Math.round(o/1401298464324817e-60))>>>0,t,e);else{var i=Math.floor(Math.log(o)/Math.LN2);l((r<<31|i+127<<23|8388607&Math.round(o*Math.pow(2,-i)*8388608))>>>0,t,e)}}function f(l,o,t){var e=l(o,t),r=2*(e>>31)+1,i=e>>>23&255,d=8388607&e;return i===255?d?NaN:r*(1/0):i===0?1401298464324817e-60*r*d:r*Math.pow(2,i-150)*(d+8388608)}s.writeFloatLE=h.bind(null,a),s.writeFloatBE=h.bind(null,u),s.readFloatLE=f.bind(null,c),s.readFloatBE=f.bind(null,p)}(),typeof Float64Array<"u"?function(){var h=new Float64Array([-0]),f=new Uint8Array(h.buffer),l=f[7]===128;function o(i,d,g){h[0]=i,d[g]=f[0],d[g+1]=f[1],d[g+2]=f[2],d[g+3]=f[3],d[g+4]=f[4],d[g+5]=f[5],d[g+6]=f[6],d[g+7]=f[7]}function t(i,d,g){h[0]=i,d[g]=f[7],d[g+1]=f[6],d[g+2]=f[5],d[g+3]=f[4],d[g+4]=f[3],d[g+5]=f[2],d[g+6]=f[1],d[g+7]=f[0]}function e(i,d){return f[0]=i[d],f[1]=i[d+1],f[2]=i[d+2],f[3]=i[d+3],f[4]=i[d+4],f[5]=i[d+5],f[6]=i[d+6],f[7]=i[d+7],h[0]}function r(i,d){return f[7]=i[d],f[6]=i[d+1],f[5]=i[d+2],f[4]=i[d+3],f[3]=i[d+4],f[2]=i[d+5],f[1]=i[d+6],f[0]=i[d+7],h[0]}s.writeDoubleLE=l?o:t,s.writeDoubleBE=l?t:o,s.readDoubleLE=l?e:r,s.readDoubleBE=l?r:e}():function(){function h(l,o,t,e,r,i){var d=e<0?1:0;if(d&&(e=-e),e===0)l(0,r,i+o),l(1/e>0?0:2147483648,r,i+t);else if(isNaN(e))l(0,r,i+o),l(2146959360,r,i+t);else if(e>17976931348623157e292)l(0,r,i+o),l((d<<31|2146435072)>>>0,r,i+t);else{var g;if(e<22250738585072014e-324)l((g=e/5e-324)>>>0,r,i+o),l((d<<31|g/4294967296)>>>0,r,i+t);else{var m=Math.floor(Math.log(e)/Math.LN2);m===1024&&(m=1023),l(4503599627370496*(g=e*Math.pow(2,-m))>>>0,r,i+o),l((d<<31|m+1023<<20|1048576*g&1048575)>>>0,r,i+t)}}}function f(l,o,t,e,r){var i=l(e,r+o),d=l(e,r+t),g=2*(d>>31)+1,m=d>>>20&2047,_=4294967296*(1048575&d)+i;return m===2047?_?NaN:g*(1/0):m===0?5e-324*g*_:g*Math.pow(2,m-1075)*(_+4503599627370496)}s.writeDoubleLE=h.bind(null,a,0,4),s.writeDoubleBE=h.bind(null,u,4,0),s.readDoubleLE=f.bind(null,c,0,4),s.readDoubleBE=f.bind(null,p,4,0)}(),s}function a(s,h,f){h[f]=255&s,h[f+1]=s>>>8&255,h[f+2]=s>>>16&255,h[f+3]=s>>>24}function u(s,h,f){h[f]=s>>>24,h[f+1]=s>>>16&255,h[f+2]=s>>>8&255,h[f+3]=255&s}function c(s,h){return(s[h]|s[h+1]<<8|s[h+2]<<16|s[h+3]<<24)>>>0}function p(s,h){return(s[h]<<24|s[h+1]<<16|s[h+2]<<8|s[h+3])>>>0}b.exports=n(n)},7199:module=>{function inquire(moduleName){try{var mod=eval("quire".replace(/^/,"re"))(moduleName);if(mod&&(mod.length||Object.keys(mod).length))return mod}catch(b){}return null}module.exports=inquire},6662:b=>{b.exports=function(n,a,u){var c=u||8192,p=c>>>1,s=null,h=c;return function(f){if(f<1||f>p)return n(f);h+f>c&&(s=n(c),h=0);var l=a.call(s,h,h+=f);return 7&h&&(h=1+(7|h)),l}}},4997:(b,n)=>{var a=n;a.length=function(u){for(var c=0,p=0,s=0;s191&&s<224?f[l++]=(31&s)<<6|63&u[c++]:s>239&&s<365?(s=((7&s)<<18|(63&u[c++])<<12|(63&u[c++])<<6|63&u[c++])-65536,f[l++]=55296+(s>>10),f[l++]=56320+(1023&s)):f[l++]=(15&s)<<12|(63&u[c++])<<6|63&u[c++],l>8191&&((h||(h=[])).push(String.fromCharCode.apply(String,f)),l=0);return h?(l&&h.push(String.fromCharCode.apply(String,f.slice(0,l))),h.join("")):String.fromCharCode.apply(String,f.slice(0,l))},a.write=function(u,c,p){for(var s,h,f=p,l=0;l>6|192,c[p++]=63&s|128):(64512&s)==55296&&(64512&(h=u.charCodeAt(l+1)))==56320?(s=65536+((1023&s)<<10)+(1023&h),++l,c[p++]=s>>18|240,c[p++]=s>>12&63|128,c[p++]=s>>6&63|128,c[p++]=63&s|128):(c[p++]=s>>12|224,c[p++]=s>>6&63|128,c[p++]=63&s|128);return p-f}},3442:(b,n)=>{n.__esModule=!0;var a=function(){function u(c){if(!c)throw new TypeError("Invalid argument; `value` has no value.");this.value=u.EMPTY,c&&u.isGuid(c)&&(this.value=c)}return u.isGuid=function(c){var p=c.toString();return c&&(c instanceof u||u.validator.test(p))},u.create=function(){return new u([u.gen(2),u.gen(1),u.gen(1),u.gen(1),u.gen(3)].join("-"))},u.createEmpty=function(){return new u("emptyguid")},u.parse=function(c){return new u(c)},u.raw=function(){return[u.gen(2),u.gen(1),u.gen(1),u.gen(1),u.gen(3)].join("-")},u.gen=function(c){for(var p="",s=0;s{b.exports=a;var n=null;try{n=new WebAssembly.Instance(new WebAssembly.Module(new Uint8Array([0,97,115,109,1,0,0,0,1,13,2,96,0,1,127,96,4,127,127,127,127,1,127,3,7,6,0,1,1,1,1,1,6,6,1,127,1,65,0,11,7,50,6,3,109,117,108,0,1,5,100,105,118,95,115,0,2,5,100,105,118,95,117,0,3,5,114,101,109,95,115,0,4,5,114,101,109,95,117,0,5,8,103,101,116,95,104,105,103,104,0,0,10,191,1,6,4,0,35,0,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,126,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,127,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,128,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,129,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,130,34,4,66,32,135,167,36,0,32,4,167,11])),{}).exports}catch{}function a(T,M,N){this.low=0|T,this.high=0|M,this.unsigned=!!N}function u(T){return(T&&T.__isLong__)===!0}a.prototype.__isLong__,Object.defineProperty(a.prototype,"__isLong__",{value:!0}),a.isLong=u;var c={},p={};function s(T,M){var N,B,$;return M?($=0<=(T>>>=0)&&T<256)&&(B=p[T])?B:(N=f(T,(0|T)<0?-1:0,!0),$&&(p[T]=N),N):($=-128<=(T|=0)&&T<128)&&(B=c[T])?B:(N=f(T,T<0?-1:0,!1),$&&(c[T]=N),N)}function h(T,M){if(isNaN(T))return M?m:g;if(M){if(T<0)return m;if(T>=r)return S}else{if(T<=-i)return O;if(T+1>=i)return v}return T<0?h(-T,M).neg():f(T%e|0,T/e|0,M)}function f(T,M,N){return new a(T,M,N)}a.fromInt=s,a.fromNumber=h,a.fromBits=f;var l=Math.pow;function o(T,M,N){if(T.length===0)throw Error("empty string");if(T==="NaN"||T==="Infinity"||T==="+Infinity"||T==="-Infinity")return g;if(typeof M=="number"?(N=M,M=!1):M=!!M,(N=N||10)<2||360)throw Error("interior hyphen");if(B===0)return o(T.substring(1),M,N).neg();for(var $=h(l(N,8)),L=g,H=0;H>>0:this.low},A.toNumber=function(){return this.unsigned?(this.high>>>0)*e+(this.low>>>0):this.high*e+(this.low>>>0)},A.toString=function(T){if((T=T||10)<2||36>>0).toString(T);if((L=C).isZero())return z+H;for(;z.length<6;)z="0"+z;H=""+z+H}},A.getHighBits=function(){return this.high},A.getHighBitsUnsigned=function(){return this.high>>>0},A.getLowBits=function(){return this.low},A.getLowBitsUnsigned=function(){return this.low>>>0},A.getNumBitsAbs=function(){if(this.isNegative())return this.eq(O)?64:this.neg().getNumBitsAbs();for(var T=this.high!=0?this.high:this.low,M=31;M>0&&!(T&1<=0},A.isOdd=function(){return(1&this.low)==1},A.isEven=function(){return(1&this.low)==0},A.equals=function(T){return u(T)||(T=t(T)),(this.unsigned===T.unsigned||this.high>>>31!=1||T.high>>>31!=1)&&this.high===T.high&&this.low===T.low},A.eq=A.equals,A.notEquals=function(T){return!this.eq(T)},A.neq=A.notEquals,A.ne=A.notEquals,A.lessThan=function(T){return this.comp(T)<0},A.lt=A.lessThan,A.lessThanOrEqual=function(T){return this.comp(T)<=0},A.lte=A.lessThanOrEqual,A.le=A.lessThanOrEqual,A.greaterThan=function(T){return this.comp(T)>0},A.gt=A.greaterThan,A.greaterThanOrEqual=function(T){return this.comp(T)>=0},A.gte=A.greaterThanOrEqual,A.ge=A.greaterThanOrEqual,A.compare=function(T){if(u(T)||(T=t(T)),this.eq(T))return 0;var M=this.isNegative(),N=T.isNegative();return M&&!N?-1:!M&&N?1:this.unsigned?T.high>>>0>this.high>>>0||T.high===this.high&&T.low>>>0>this.low>>>0?-1:1:this.sub(T).isNegative()?-1:1},A.comp=A.compare,A.negate=function(){return!this.unsigned&&this.eq(O)?O:this.not().add(_)},A.neg=A.negate,A.add=function(T){u(T)||(T=t(T));var M=this.high>>>16,N=65535&this.high,B=this.low>>>16,$=65535&this.low,L=T.high>>>16,H=65535&T.high,C=T.low>>>16,z=0,J=0,X=0,te=0;return X+=(te+=$+(65535&T.low))>>>16,J+=(X+=B+C)>>>16,z+=(J+=N+H)>>>16,z+=M+L,f((X&=65535)<<16|(te&=65535),(z&=65535)<<16|(J&=65535),this.unsigned)},A.subtract=function(T){return u(T)||(T=t(T)),this.add(T.neg())},A.sub=A.subtract,A.multiply=function(T){if(this.isZero())return g;if(u(T)||(T=t(T)),n)return f(n.mul(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned);if(T.isZero())return g;if(this.eq(O))return T.isOdd()?O:g;if(T.eq(O))return this.isOdd()?O:g;if(this.isNegative())return T.isNegative()?this.neg().mul(T.neg()):this.neg().mul(T).neg();if(T.isNegative())return this.mul(T.neg()).neg();if(this.lt(d)&&T.lt(d))return h(this.toNumber()*T.toNumber(),this.unsigned);var M=this.high>>>16,N=65535&this.high,B=this.low>>>16,$=65535&this.low,L=T.high>>>16,H=65535&T.high,C=T.low>>>16,z=65535&T.low,J=0,X=0,te=0,ne=0;return te+=(ne+=$*z)>>>16,X+=(te+=B*z)>>>16,te&=65535,X+=(te+=$*C)>>>16,J+=(X+=N*z)>>>16,X&=65535,J+=(X+=B*C)>>>16,X&=65535,J+=(X+=$*H)>>>16,J+=M*z+N*C+B*H+$*L,f((te&=65535)<<16|(ne&=65535),(J&=65535)<<16|(X&=65535),this.unsigned)},A.mul=A.multiply,A.divide=function(T){if(u(T)||(T=t(T)),T.isZero())throw Error("division by zero");var M,N,B;if(n)return this.unsigned||this.high!==-2147483648||T.low!==-1||T.high!==-1?f((this.unsigned?n.div_u:n.div_s)(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned):this;if(this.isZero())return this.unsigned?m:g;if(this.unsigned){if(T.unsigned||(T=T.toUnsigned()),T.gt(this))return m;if(T.gt(this.shru(1)))return y;B=m}else{if(this.eq(O))return T.eq(_)||T.eq(w)?O:T.eq(O)?_:(M=this.shr(1).div(T).shl(1)).eq(g)?T.isNegative()?_:w:(N=this.sub(T.mul(M)),B=M.add(N.div(T)));if(T.eq(O))return this.unsigned?m:g;if(this.isNegative())return T.isNegative()?this.neg().div(T.neg()):this.neg().div(T).neg();if(T.isNegative())return this.div(T.neg()).neg();B=g}for(N=this;N.gte(T);){M=Math.max(1,Math.floor(N.toNumber()/T.toNumber()));for(var $=Math.ceil(Math.log(M)/Math.LN2),L=$<=48?1:l(2,$-48),H=h(M),C=H.mul(T);C.isNegative()||C.gt(N);)C=(H=h(M-=L,this.unsigned)).mul(T);H.isZero()&&(H=_),B=B.add(H),N=N.sub(C)}return B},A.div=A.divide,A.modulo=function(T){return u(T)||(T=t(T)),n?f((this.unsigned?n.rem_u:n.rem_s)(this.low,this.high,T.low,T.high),n.get_high(),this.unsigned):this.sub(this.div(T).mul(T))},A.mod=A.modulo,A.rem=A.modulo,A.not=function(){return f(~this.low,~this.high,this.unsigned)},A.and=function(T){return u(T)||(T=t(T)),f(this.low&T.low,this.high&T.high,this.unsigned)},A.or=function(T){return u(T)||(T=t(T)),f(this.low|T.low,this.high|T.high,this.unsigned)},A.xor=function(T){return u(T)||(T=t(T)),f(this.low^T.low,this.high^T.high,this.unsigned)},A.shiftLeft=function(T){return u(T)&&(T=T.toInt()),(T&=63)==0?this:T<32?f(this.low<>>32-T,this.unsigned):f(0,this.low<>>T|this.high<<32-T,this.high>>T,this.unsigned):f(this.high>>T-32,this.high>=0?0:-1,this.unsigned)},A.shr=A.shiftRight,A.shiftRightUnsigned=function(T){if(u(T)&&(T=T.toInt()),(T&=63)==0)return this;var M=this.high;return T<32?f(this.low>>>T|M<<32-T,M>>>T,this.unsigned):f(T===32?M:M>>>T-32,0,this.unsigned)},A.shru=A.shiftRightUnsigned,A.shr_u=A.shiftRightUnsigned,A.toSigned=function(){return this.unsigned?f(this.low,this.high,!1):this},A.toUnsigned=function(){return this.unsigned?this:f(this.low,this.high,!0)},A.toBytes=function(T){return T?this.toBytesLE():this.toBytesBE()},A.toBytesLE=function(){var T=this.high,M=this.low;return[255&M,M>>>8&255,M>>>16&255,M>>>24,255&T,T>>>8&255,T>>>16&255,T>>>24]},A.toBytesBE=function(){var T=this.high,M=this.low;return[T>>>24,T>>>16&255,T>>>8&255,255&T,M>>>24,M>>>16&255,M>>>8&255,255&M]},a.fromBytes=function(T,M,N){return N?a.fromBytesLE(T,M):a.fromBytesBE(T,M)},a.fromBytesLE=function(T,M){return new a(T[0]|T[1]<<8|T[2]<<16|T[3]<<24,T[4]|T[5]<<8|T[6]<<16|T[7]<<24,M)},a.fromBytesBE=function(T,M){return new a(T[4]<<24|T[5]<<16|T[6]<<8|T[7],T[0]<<24|T[1]<<16|T[2]<<8|T[3],M)}},1446:(b,n,a)=>{var u,c,p,s=a(2100),h=s.Reader,f=s.Writer,l=s.util,o=s.roots.default||(s.roots.default={});o.onnx=((p={}).Version=(u={},(c=Object.create(u))[u[0]="_START_VERSION"]=0,c[u[1]="IR_VERSION_2017_10_10"]=1,c[u[2]="IR_VERSION_2017_10_30"]=2,c[u[3]="IR_VERSION_2017_11_3"]=3,c[u[4]="IR_VERSION_2019_1_22"]=4,c[u[5]="IR_VERSION"]=5,c),p.AttributeProto=function(){function t(e){if(this.floats=[],this.ints=[],this.strings=[],this.tensors=[],this.graphs=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.name=e.string();break;case 21:d.refAttrName=e.string();break;case 13:d.docString=e.string();break;case 20:d.type=e.int32();break;case 2:d.f=e.float();break;case 3:d.i=e.int64();break;case 4:d.s=e.bytes();break;case 5:d.t=o.onnx.TensorProto.decode(e,e.uint32());break;case 6:d.g=o.onnx.GraphProto.decode(e,e.uint32());break;case 7:if(d.floats&&d.floats.length||(d.floats=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.i.high>>>0).toNumber())),e.s!=null&&(typeof e.s=="string"?l.base64.decode(e.s,r.s=l.newBuffer(l.base64.length(e.s)),0):e.s.length&&(r.s=e.s)),e.t!=null){if(typeof e.t!="object")throw TypeError(".onnx.AttributeProto.t: object expected");r.t=o.onnx.TensorProto.fromObject(e.t)}if(e.g!=null){if(typeof e.g!="object")throw TypeError(".onnx.AttributeProto.g: object expected");r.g=o.onnx.GraphProto.fromObject(e.g)}if(e.floats){if(!Array.isArray(e.floats))throw TypeError(".onnx.AttributeProto.floats: array expected");r.floats=[];for(var i=0;i>>0,e.ints[i].high>>>0).toNumber())}if(e.strings){if(!Array.isArray(e.strings))throw TypeError(".onnx.AttributeProto.strings: array expected");for(r.strings=[],i=0;i>>0,e.i.high>>>0).toNumber():e.i),e.s!=null&&e.hasOwnProperty("s")&&(i.s=r.bytes===String?l.base64.encode(e.s,0,e.s.length):r.bytes===Array?Array.prototype.slice.call(e.s):e.s),e.t!=null&&e.hasOwnProperty("t")&&(i.t=o.onnx.TensorProto.toObject(e.t,r)),e.g!=null&&e.hasOwnProperty("g")&&(i.g=o.onnx.GraphProto.toObject(e.g,r)),e.floats&&e.floats.length){i.floats=[];for(var g=0;g>>0,e.ints[g].high>>>0).toNumber():e.ints[g];if(e.strings&&e.strings.length)for(i.strings=[],g=0;g>>3){case 1:d.name=e.string();break;case 2:d.type=o.onnx.TypeProto.decode(e,e.uint32());break;case 3:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.type!=null&&e.hasOwnProperty("type")){var r=o.onnx.TypeProto.verify(e.type);if(r)return"type."+r}return e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString)?"docString: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.ValueInfoProto)return e;var r=new o.onnx.ValueInfoProto;if(e.name!=null&&(r.name=String(e.name)),e.type!=null){if(typeof e.type!="object")throw TypeError(".onnx.ValueInfoProto.type: object expected");r.type=o.onnx.TypeProto.fromObject(e.type)}return e.docString!=null&&(r.docString=String(e.docString)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.name="",i.type=null,i.docString=""),e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.type!=null&&e.hasOwnProperty("type")&&(i.type=o.onnx.TypeProto.toObject(e.type,r)),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.NodeProto=function(){function t(e){if(this.input=[],this.output=[],this.attribute=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.input&&d.input.length||(d.input=[]),d.input.push(e.string());break;case 2:d.output&&d.output.length||(d.output=[]),d.output.push(e.string());break;case 3:d.name=e.string();break;case 4:d.opType=e.string();break;case 7:d.domain=e.string();break;case 5:d.attribute&&d.attribute.length||(d.attribute=[]),d.attribute.push(o.onnx.AttributeProto.decode(e,e.uint32()));break;case 6:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.input!=null&&e.hasOwnProperty("input")){if(!Array.isArray(e.input))return"input: array expected";for(var r=0;r>>3){case 1:d.irVersion=e.int64();break;case 8:d.opsetImport&&d.opsetImport.length||(d.opsetImport=[]),d.opsetImport.push(o.onnx.OperatorSetIdProto.decode(e,e.uint32()));break;case 2:d.producerName=e.string();break;case 3:d.producerVersion=e.string();break;case 4:d.domain=e.string();break;case 5:d.modelVersion=e.int64();break;case 6:d.docString=e.string();break;case 7:d.graph=o.onnx.GraphProto.decode(e,e.uint32());break;case 14:d.metadataProps&&d.metadataProps.length||(d.metadataProps=[]),d.metadataProps.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.irVersion!=null&&e.hasOwnProperty("irVersion")&&!(l.isInteger(e.irVersion)||e.irVersion&&l.isInteger(e.irVersion.low)&&l.isInteger(e.irVersion.high)))return"irVersion: integer|Long expected";if(e.opsetImport!=null&&e.hasOwnProperty("opsetImport")){if(!Array.isArray(e.opsetImport))return"opsetImport: array expected";for(var r=0;r>>0,e.irVersion.high>>>0).toNumber())),e.opsetImport){if(!Array.isArray(e.opsetImport))throw TypeError(".onnx.ModelProto.opsetImport: array expected");r.opsetImport=[];for(var i=0;i>>0,e.modelVersion.high>>>0).toNumber())),e.docString!=null&&(r.docString=String(e.docString)),e.graph!=null){if(typeof e.graph!="object")throw TypeError(".onnx.ModelProto.graph: object expected");r.graph=o.onnx.GraphProto.fromObject(e.graph)}if(e.metadataProps){if(!Array.isArray(e.metadataProps))throw TypeError(".onnx.ModelProto.metadataProps: array expected");for(r.metadataProps=[],i=0;i>>0,e.irVersion.high>>>0).toNumber():e.irVersion),e.producerName!=null&&e.hasOwnProperty("producerName")&&(i.producerName=e.producerName),e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&(i.producerVersion=e.producerVersion),e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&(typeof e.modelVersion=="number"?i.modelVersion=r.longs===String?String(e.modelVersion):e.modelVersion:i.modelVersion=r.longs===String?l.Long.prototype.toString.call(e.modelVersion):r.longs===Number?new l.LongBits(e.modelVersion.low>>>0,e.modelVersion.high>>>0).toNumber():e.modelVersion),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.graph!=null&&e.hasOwnProperty("graph")&&(i.graph=o.onnx.GraphProto.toObject(e.graph,r)),e.opsetImport&&e.opsetImport.length){i.opsetImport=[];for(var g=0;g>>3){case 1:d.key=e.string();break;case 2:d.value=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.key!=null&&e.hasOwnProperty("key")&&!l.isString(e.key)?"key: string expected":e.value!=null&&e.hasOwnProperty("value")&&!l.isString(e.value)?"value: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.StringStringEntryProto)return e;var r=new o.onnx.StringStringEntryProto;return e.key!=null&&(r.key=String(e.key)),e.value!=null&&(r.value=String(e.value)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.key="",i.value=""),e.key!=null&&e.hasOwnProperty("key")&&(i.key=e.key),e.value!=null&&e.hasOwnProperty("value")&&(i.value=e.value),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.TensorAnnotation=function(){function t(e){if(this.quantParameterTensorNames=[],e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.tensorName=e.string();break;case 2:d.quantParameterTensorNames&&d.quantParameterTensorNames.length||(d.quantParameterTensorNames=[]),d.quantParameterTensorNames.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.tensorName!=null&&e.hasOwnProperty("tensorName")&&!l.isString(e.tensorName))return"tensorName: string expected";if(e.quantParameterTensorNames!=null&&e.hasOwnProperty("quantParameterTensorNames")){if(!Array.isArray(e.quantParameterTensorNames))return"quantParameterTensorNames: array expected";for(var r=0;r>>3){case 1:d.node&&d.node.length||(d.node=[]),d.node.push(o.onnx.NodeProto.decode(e,e.uint32()));break;case 2:d.name=e.string();break;case 5:d.initializer&&d.initializer.length||(d.initializer=[]),d.initializer.push(o.onnx.TensorProto.decode(e,e.uint32()));break;case 10:d.docString=e.string();break;case 11:d.input&&d.input.length||(d.input=[]),d.input.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 12:d.output&&d.output.length||(d.output=[]),d.output.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 13:d.valueInfo&&d.valueInfo.length||(d.valueInfo=[]),d.valueInfo.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 14:d.quantizationAnnotation&&d.quantizationAnnotation.length||(d.quantizationAnnotation=[]),d.quantizationAnnotation.push(o.onnx.TensorAnnotation.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.node!=null&&e.hasOwnProperty("node")){if(!Array.isArray(e.node))return"node: array expected";for(var r=0;r>>3){case 1:if(d.dims&&d.dims.length||(d.dims=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos>>0,e.dims[i].high>>>0).toNumber())}if(e.dataType!=null&&(r.dataType=0|e.dataType),e.segment!=null){if(typeof e.segment!="object")throw TypeError(".onnx.TensorProto.segment: object expected");r.segment=o.onnx.TensorProto.Segment.fromObject(e.segment)}if(e.floatData){if(!Array.isArray(e.floatData))throw TypeError(".onnx.TensorProto.floatData: array expected");for(r.floatData=[],i=0;i>>0,e.int64Data[i].high>>>0).toNumber())}if(e.name!=null&&(r.name=String(e.name)),e.docString!=null&&(r.docString=String(e.docString)),e.rawData!=null&&(typeof e.rawData=="string"?l.base64.decode(e.rawData,r.rawData=l.newBuffer(l.base64.length(e.rawData)),0):e.rawData.length&&(r.rawData=e.rawData)),e.externalData){if(!Array.isArray(e.externalData))throw TypeError(".onnx.TensorProto.externalData: array expected");for(r.externalData=[],i=0;i>>0,e.uint64Data[i].high>>>0).toNumber(!0))}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.dims=[],i.floatData=[],i.int32Data=[],i.stringData=[],i.int64Data=[],i.doubleData=[],i.uint64Data=[],i.externalData=[]),r.defaults&&(i.dataType=0,i.segment=null,i.name="",r.bytes===String?i.rawData="":(i.rawData=[],r.bytes!==Array&&(i.rawData=l.newBuffer(i.rawData))),i.docString="",i.dataLocation=r.enums===String?"DEFAULT":0),e.dims&&e.dims.length){i.dims=[];for(var d=0;d>>0,e.dims[d].high>>>0).toNumber():e.dims[d]}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&(i.dataType=e.dataType),e.segment!=null&&e.hasOwnProperty("segment")&&(i.segment=o.onnx.TensorProto.Segment.toObject(e.segment,r)),e.floatData&&e.floatData.length)for(i.floatData=[],d=0;d>>0,e.int64Data[d].high>>>0).toNumber():e.int64Data[d];if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.rawData!=null&&e.hasOwnProperty("rawData")&&(i.rawData=r.bytes===String?l.base64.encode(e.rawData,0,e.rawData.length):r.bytes===Array?Array.prototype.slice.call(e.rawData):e.rawData),e.doubleData&&e.doubleData.length)for(i.doubleData=[],d=0;d>>0,e.uint64Data[d].high>>>0).toNumber(!0):e.uint64Data[d];if(e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.externalData&&e.externalData.length)for(i.externalData=[],d=0;d>>3){case 1:g.begin=r.int64();break;case 2:g.end=r.int64();break;default:r.skipType(7&m)}}return g},e.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},e.verify=function(r){return typeof r!="object"||r===null?"object expected":r.begin!=null&&r.hasOwnProperty("begin")&&!(l.isInteger(r.begin)||r.begin&&l.isInteger(r.begin.low)&&l.isInteger(r.begin.high))?"begin: integer|Long expected":r.end!=null&&r.hasOwnProperty("end")&&!(l.isInteger(r.end)||r.end&&l.isInteger(r.end.low)&&l.isInteger(r.end.high))?"end: integer|Long expected":null},e.fromObject=function(r){if(r instanceof o.onnx.TensorProto.Segment)return r;var i=new o.onnx.TensorProto.Segment;return r.begin!=null&&(l.Long?(i.begin=l.Long.fromValue(r.begin)).unsigned=!1:typeof r.begin=="string"?i.begin=parseInt(r.begin,10):typeof r.begin=="number"?i.begin=r.begin:typeof r.begin=="object"&&(i.begin=new l.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber())),r.end!=null&&(l.Long?(i.end=l.Long.fromValue(r.end)).unsigned=!1:typeof r.end=="string"?i.end=parseInt(r.end,10):typeof r.end=="number"?i.end=r.end:typeof r.end=="object"&&(i.end=new l.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber())),i},e.toObject=function(r,i){i||(i={});var d={};if(i.defaults){if(l.Long){var g=new l.Long(0,0,!1);d.begin=i.longs===String?g.toString():i.longs===Number?g.toNumber():g}else d.begin=i.longs===String?"0":0;l.Long?(g=new l.Long(0,0,!1),d.end=i.longs===String?g.toString():i.longs===Number?g.toNumber():g):d.end=i.longs===String?"0":0}return r.begin!=null&&r.hasOwnProperty("begin")&&(typeof r.begin=="number"?d.begin=i.longs===String?String(r.begin):r.begin:d.begin=i.longs===String?l.Long.prototype.toString.call(r.begin):i.longs===Number?new l.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber():r.begin),r.end!=null&&r.hasOwnProperty("end")&&(typeof r.end=="number"?d.end=i.longs===String?String(r.end):r.end:d.end=i.longs===String?l.Long.prototype.toString.call(r.end):i.longs===Number?new l.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber():r.end),d},e.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t.DataLocation=function(){var e={},r=Object.create(e);return r[e[0]="DEFAULT"]=0,r[e[1]="EXTERNAL"]=1,r}(),t}(),p.TensorShapeProto=function(){function t(e){if(this.dim=[],e)for(var r=Object.keys(e),i=0;i>>3==1?(d.dim&&d.dim.length||(d.dim=[]),d.dim.push(o.onnx.TensorShapeProto.Dimension.decode(e,e.uint32()))):e.skipType(7&g)}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.dim!=null&&e.hasOwnProperty("dim")){if(!Array.isArray(e.dim))return"dim: array expected";for(var r=0;r>>3){case 1:m.dimValue=i.int64();break;case 2:m.dimParam=i.string();break;case 3:m.denotation=i.string();break;default:i.skipType(7&_)}}return m},e.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},e.verify=function(i){if(typeof i!="object"||i===null)return"object expected";var d={};if(i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(d.value=1,!(l.isInteger(i.dimValue)||i.dimValue&&l.isInteger(i.dimValue.low)&&l.isInteger(i.dimValue.high))))return"dimValue: integer|Long expected";if(i.dimParam!=null&&i.hasOwnProperty("dimParam")){if(d.value===1)return"value: multiple values";if(d.value=1,!l.isString(i.dimParam))return"dimParam: string expected"}return i.denotation!=null&&i.hasOwnProperty("denotation")&&!l.isString(i.denotation)?"denotation: string expected":null},e.fromObject=function(i){if(i instanceof o.onnx.TensorShapeProto.Dimension)return i;var d=new o.onnx.TensorShapeProto.Dimension;return i.dimValue!=null&&(l.Long?(d.dimValue=l.Long.fromValue(i.dimValue)).unsigned=!1:typeof i.dimValue=="string"?d.dimValue=parseInt(i.dimValue,10):typeof i.dimValue=="number"?d.dimValue=i.dimValue:typeof i.dimValue=="object"&&(d.dimValue=new l.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber())),i.dimParam!=null&&(d.dimParam=String(i.dimParam)),i.denotation!=null&&(d.denotation=String(i.denotation)),d},e.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.denotation=""),i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(typeof i.dimValue=="number"?g.dimValue=d.longs===String?String(i.dimValue):i.dimValue:g.dimValue=d.longs===String?l.Long.prototype.toString.call(i.dimValue):d.longs===Number?new l.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber():i.dimValue,d.oneofs&&(g.value="dimValue")),i.dimParam!=null&&i.hasOwnProperty("dimParam")&&(g.dimParam=i.dimParam,d.oneofs&&(g.value="dimParam")),i.denotation!=null&&i.hasOwnProperty("denotation")&&(g.denotation=i.denotation),g},e.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t}(),p.TypeProto=function(){function t(r){if(r)for(var i=Object.keys(r),d=0;d>>3){case 1:g.tensorType=o.onnx.TypeProto.Tensor.decode(r,r.uint32());break;case 6:g.denotation=r.string();break;default:r.skipType(7&m)}}return g},t.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},t.verify=function(r){if(typeof r!="object"||r===null)return"object expected";if(r.tensorType!=null&&r.hasOwnProperty("tensorType")){var i=o.onnx.TypeProto.Tensor.verify(r.tensorType);if(i)return"tensorType."+i}return r.denotation!=null&&r.hasOwnProperty("denotation")&&!l.isString(r.denotation)?"denotation: string expected":null},t.fromObject=function(r){if(r instanceof o.onnx.TypeProto)return r;var i=new o.onnx.TypeProto;if(r.tensorType!=null){if(typeof r.tensorType!="object")throw TypeError(".onnx.TypeProto.tensorType: object expected");i.tensorType=o.onnx.TypeProto.Tensor.fromObject(r.tensorType)}return r.denotation!=null&&(i.denotation=String(r.denotation)),i},t.toObject=function(r,i){i||(i={});var d={};return i.defaults&&(d.denotation=""),r.tensorType!=null&&r.hasOwnProperty("tensorType")&&(d.tensorType=o.onnx.TypeProto.Tensor.toObject(r.tensorType,i),i.oneofs&&(d.value="tensorType")),r.denotation!=null&&r.hasOwnProperty("denotation")&&(d.denotation=r.denotation),d},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t.Tensor=function(){function r(i){if(i)for(var d=Object.keys(i),g=0;g>>3){case 1:m.elemType=i.int32();break;case 2:m.shape=o.onnx.TensorShapeProto.decode(i,i.uint32());break;default:i.skipType(7&_)}}return m},r.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},r.verify=function(i){if(typeof i!="object"||i===null)return"object expected";if(i.elemType!=null&&i.hasOwnProperty("elemType")&&!l.isInteger(i.elemType))return"elemType: integer expected";if(i.shape!=null&&i.hasOwnProperty("shape")){var d=o.onnx.TensorShapeProto.verify(i.shape);if(d)return"shape."+d}return null},r.fromObject=function(i){if(i instanceof o.onnx.TypeProto.Tensor)return i;var d=new o.onnx.TypeProto.Tensor;if(i.elemType!=null&&(d.elemType=0|i.elemType),i.shape!=null){if(typeof i.shape!="object")throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");d.shape=o.onnx.TensorShapeProto.fromObject(i.shape)}return d},r.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.elemType=0,g.shape=null),i.elemType!=null&&i.hasOwnProperty("elemType")&&(g.elemType=i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&(g.shape=o.onnx.TensorShapeProto.toObject(i.shape,d)),g},r.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},r}(),t}(),p.OperatorSetIdProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i>>3){case 1:d.domain=e.string();break;case 2:d.version=e.int64();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.domain!=null&&e.hasOwnProperty("domain")&&!l.isString(e.domain)?"domain: string expected":e.version!=null&&e.hasOwnProperty("version")&&!(l.isInteger(e.version)||e.version&&l.isInteger(e.version.low)&&l.isInteger(e.version.high))?"version: integer|Long expected":null},t.fromObject=function(e){if(e instanceof o.onnx.OperatorSetIdProto)return e;var r=new o.onnx.OperatorSetIdProto;return e.domain!=null&&(r.domain=String(e.domain)),e.version!=null&&(l.Long?(r.version=l.Long.fromValue(e.version)).unsigned=!1:typeof e.version=="string"?r.version=parseInt(e.version,10):typeof e.version=="number"?r.version=e.version:typeof e.version=="object"&&(r.version=new l.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber())),r},t.toObject=function(e,r){r||(r={});var i={};if(r.defaults)if(i.domain="",l.Long){var d=new l.Long(0,0,!1);i.version=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.version=r.longs===String?"0":0;return e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.version!=null&&e.hasOwnProperty("version")&&(typeof e.version=="number"?i.version=r.longs===String?String(e.version):e.version:i.version=r.longs===String?l.Long.prototype.toString.call(e.version):r.longs===Number?new l.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber():e.version),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p),b.exports=o},2100:(b,n,a)=>{b.exports=a(9482)},9482:(b,n,a)=>{var u=n;function c(){u.util._configure(),u.Writer._configure(u.BufferWriter),u.Reader._configure(u.BufferReader)}u.build="minimal",u.Writer=a(1173),u.BufferWriter=a(3155),u.Reader=a(1408),u.BufferReader=a(593),u.util=a(9693),u.rpc=a(5994),u.roots=a(5054),u.configure=c,c()},1408:(b,n,a)=>{b.exports=f;var u,c=a(9693),p=c.LongBits,s=c.utf8;function h(d,g){return RangeError("index out of range: "+d.pos+" + "+(g||1)+" > "+d.len)}function f(d){this.buf=d,this.pos=0,this.len=d.length}var l,o=typeof Uint8Array<"u"?function(d){if(d instanceof Uint8Array||Array.isArray(d))return new f(d);throw Error("illegal buffer")}:function(d){if(Array.isArray(d))return new f(d);throw Error("illegal buffer")},t=function(){return c.Buffer?function(d){return(f.create=function(g){return c.Buffer.isBuffer(g)?new u(g):o(g)})(d)}:o};function e(){var d=new p(0,0),g=0;if(!(this.len-this.pos>4)){for(;g<3;++g){if(this.pos>=this.len)throw h(this);if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d}return d.lo=(d.lo|(127&this.buf[this.pos++])<<7*g)>>>0,d}for(;g<4;++g)if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d;if(d.lo=(d.lo|(127&this.buf[this.pos])<<28)>>>0,d.hi=(d.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return d;if(g=0,this.len-this.pos>4){for(;g<5;++g)if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}else for(;g<5;++g){if(this.pos>=this.len)throw h(this);if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}throw Error("invalid varint encoding")}function r(d,g){return(d[g-4]|d[g-3]<<8|d[g-2]<<16|d[g-1]<<24)>>>0}function i(){if(this.pos+8>this.len)throw h(this,8);return new p(r(this.buf,this.pos+=4),r(this.buf,this.pos+=4))}f.create=t(),f.prototype._slice=c.Array.prototype.subarray||c.Array.prototype.slice,f.prototype.uint32=(l=4294967295,function(){if(l=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(l=(l|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(l=(l|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(l=(l|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(l=(l|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return l;if((this.pos+=5)>this.len)throw this.pos=this.len,h(this,10);return l}),f.prototype.int32=function(){return 0|this.uint32()},f.prototype.sint32=function(){var d=this.uint32();return d>>>1^-(1&d)|0},f.prototype.bool=function(){return this.uint32()!==0},f.prototype.fixed32=function(){if(this.pos+4>this.len)throw h(this,4);return r(this.buf,this.pos+=4)},f.prototype.sfixed32=function(){if(this.pos+4>this.len)throw h(this,4);return 0|r(this.buf,this.pos+=4)},f.prototype.float=function(){if(this.pos+4>this.len)throw h(this,4);var d=c.float.readFloatLE(this.buf,this.pos);return this.pos+=4,d},f.prototype.double=function(){if(this.pos+8>this.len)throw h(this,4);var d=c.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,d},f.prototype.bytes=function(){var d=this.uint32(),g=this.pos,m=this.pos+d;if(m>this.len)throw h(this,d);return this.pos+=d,Array.isArray(this.buf)?this.buf.slice(g,m):g===m?new this.buf.constructor(0):this._slice.call(this.buf,g,m)},f.prototype.string=function(){var d=this.bytes();return s.read(d,0,d.length)},f.prototype.skip=function(d){if(typeof d=="number"){if(this.pos+d>this.len)throw h(this,d);this.pos+=d}else do if(this.pos>=this.len)throw h(this);while(128&this.buf[this.pos++]);return this},f.prototype.skipType=function(d){switch(d){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;(d=7&this.uint32())!=4;)this.skipType(d);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+d+" at offset "+this.pos)}return this},f._configure=function(d){u=d,f.create=t(),u._configure();var g=c.Long?"toLong":"toNumber";c.merge(f.prototype,{int64:function(){return e.call(this)[g](!1)},uint64:function(){return e.call(this)[g](!0)},sint64:function(){return e.call(this).zzDecode()[g](!1)},fixed64:function(){return i.call(this)[g](!0)},sfixed64:function(){return i.call(this)[g](!1)}})}},593:(b,n,a)=>{b.exports=p;var u=a(1408);(p.prototype=Object.create(u.prototype)).constructor=p;var c=a(9693);function p(s){u.call(this,s)}p._configure=function(){c.Buffer&&(p.prototype._slice=c.Buffer.prototype.slice)},p.prototype.string=function(){var s=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+s,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+s,this.len))},p._configure()},5054:b=>{b.exports={}},5994:(b,n,a)=>{n.Service=a(7948)},7948:(b,n,a)=>{b.exports=c;var u=a(9693);function c(p,s,h){if(typeof p!="function")throw TypeError("rpcImpl must be a function");u.EventEmitter.call(this),this.rpcImpl=p,this.requestDelimited=!!s,this.responseDelimited=!!h}(c.prototype=Object.create(u.EventEmitter.prototype)).constructor=c,c.prototype.rpcCall=function p(s,h,f,l,o){if(!l)throw TypeError("request must be specified");var t=this;if(!o)return u.asPromise(p,t,s,h,f,l);if(t.rpcImpl)try{return t.rpcImpl(s,h[t.requestDelimited?"encodeDelimited":"encode"](l).finish(),function(e,r){if(e)return t.emit("error",e,s),o(e);if(r!==null){if(!(r instanceof f))try{r=f[t.responseDelimited?"decodeDelimited":"decode"](r)}catch(i){return t.emit("error",i,s),o(i)}return t.emit("data",r,s),o(null,r)}t.end(!0)})}catch(e){return t.emit("error",e,s),void setTimeout(function(){o(e)},0)}else setTimeout(function(){o(Error("already ended"))},0)},c.prototype.end=function(p){return this.rpcImpl&&(p||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(b,n,a)=>{b.exports=c;var u=a(9693);function c(f,l){this.lo=f>>>0,this.hi=l>>>0}var p=c.zero=new c(0,0);p.toNumber=function(){return 0},p.zzEncode=p.zzDecode=function(){return this},p.length=function(){return 1};var s=c.zeroHash="\0\0\0\0\0\0\0\0";c.fromNumber=function(f){if(f===0)return p;var l=f<0;l&&(f=-f);var o=f>>>0,t=(f-o)/4294967296>>>0;return l&&(t=~t>>>0,o=~o>>>0,++o>4294967295&&(o=0,++t>4294967295&&(t=0))),new c(o,t)},c.from=function(f){if(typeof f=="number")return c.fromNumber(f);if(u.isString(f)){if(!u.Long)return c.fromNumber(parseInt(f,10));f=u.Long.fromString(f)}return f.low||f.high?new c(f.low>>>0,f.high>>>0):p},c.prototype.toNumber=function(f){if(!f&&this.hi>>>31){var l=1+~this.lo>>>0,o=~this.hi>>>0;return l||(o=o+1>>>0),-(l+4294967296*o)}return this.lo+4294967296*this.hi},c.prototype.toLong=function(f){return u.Long?new u.Long(0|this.lo,0|this.hi,!!f):{low:0|this.lo,high:0|this.hi,unsigned:!!f}};var h=String.prototype.charCodeAt;c.fromHash=function(f){return f===s?p:new c((h.call(f,0)|h.call(f,1)<<8|h.call(f,2)<<16|h.call(f,3)<<24)>>>0,(h.call(f,4)|h.call(f,5)<<8|h.call(f,6)<<16|h.call(f,7)<<24)>>>0)},c.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},c.prototype.zzEncode=function(){var f=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^f)>>>0,this.lo=(this.lo<<1^f)>>>0,this},c.prototype.zzDecode=function(){var f=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^f)>>>0,this.hi=(this.hi>>>1^f)>>>0,this},c.prototype.length=function(){var f=this.lo,l=(this.lo>>>28|this.hi<<4)>>>0,o=this.hi>>>24;return o===0?l===0?f<16384?f<128?1:2:f<2097152?3:4:l<16384?l<128?5:6:l<2097152?7:8:o<128?9:10}},9693:function(b,n,a){var u=n;function c(s,h,f){for(var l=Object.keys(h),o=0;o0)},u.Buffer=function(){try{var s=u.inquire("buffer").Buffer;return s.prototype.utf8Write?s:null}catch{return null}}(),u._Buffer_from=null,u._Buffer_allocUnsafe=null,u.newBuffer=function(s){return typeof s=="number"?u.Buffer?u._Buffer_allocUnsafe(s):new u.Array(s):u.Buffer?u._Buffer_from(s):typeof Uint8Array>"u"?s:new Uint8Array(s)},u.Array=typeof Uint8Array<"u"?Uint8Array:Array,u.Long=u.global.dcodeIO&&u.global.dcodeIO.Long||u.global.Long||u.inquire("long"),u.key2Re=/^true|false|0|1$/,u.key32Re=/^-?(?:0|[1-9][0-9]*)$/,u.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,u.longToHash=function(s){return s?u.LongBits.from(s).toHash():u.LongBits.zeroHash},u.longFromHash=function(s,h){var f=u.LongBits.fromHash(s);return u.Long?u.Long.fromBits(f.lo,f.hi,h):f.toNumber(!!h)},u.merge=c,u.lcFirst=function(s){return s.charAt(0).toLowerCase()+s.substring(1)},u.newError=p,u.ProtocolError=p("ProtocolError"),u.oneOfGetter=function(s){for(var h={},f=0;f-1;--o)if(h[l[o]]===1&&this[l[o]]!==void 0&&this[l[o]]!==null)return l[o]}},u.oneOfSetter=function(s){return function(h){for(var f=0;f{b.exports=t;var u,c=a(9693),p=c.LongBits,s=c.base64,h=c.utf8;function f(_,y,w){this.fn=_,this.len=y,this.next=void 0,this.val=w}function l(){}function o(_){this.head=_.head,this.tail=_.tail,this.len=_.len,this.next=_.states}function t(){this.len=0,this.head=new f(l,0,0),this.tail=this.head,this.states=null}var e=function(){return c.Buffer?function(){return(t.create=function(){return new u})()}:function(){return new t}};function r(_,y,w){y[w]=255&_}function i(_,y){this.len=_,this.next=void 0,this.val=y}function d(_,y,w){for(;_.hi;)y[w++]=127&_.lo|128,_.lo=(_.lo>>>7|_.hi<<25)>>>0,_.hi>>>=7;for(;_.lo>127;)y[w++]=127&_.lo|128,_.lo=_.lo>>>7;y[w++]=_.lo}function g(_,y,w){y[w]=255&_,y[w+1]=_>>>8&255,y[w+2]=_>>>16&255,y[w+3]=_>>>24}t.create=e(),t.alloc=function(_){return new c.Array(_)},c.Array!==Array&&(t.alloc=c.pool(t.alloc,c.Array.prototype.subarray)),t.prototype._push=function(_,y,w){return this.tail=this.tail.next=new f(_,y,w),this.len+=y,this},i.prototype=Object.create(f.prototype),i.prototype.fn=function(_,y,w){for(;_>127;)y[w++]=127&_|128,_>>>=7;y[w]=_},t.prototype.uint32=function(_){return this.len+=(this.tail=this.tail.next=new i((_>>>=0)<128?1:_<16384?2:_<2097152?3:_<268435456?4:5,_)).len,this},t.prototype.int32=function(_){return _<0?this._push(d,10,p.fromNumber(_)):this.uint32(_)},t.prototype.sint32=function(_){return this.uint32((_<<1^_>>31)>>>0)},t.prototype.uint64=function(_){var y=p.from(_);return this._push(d,y.length(),y)},t.prototype.int64=t.prototype.uint64,t.prototype.sint64=function(_){var y=p.from(_).zzEncode();return this._push(d,y.length(),y)},t.prototype.bool=function(_){return this._push(r,1,_?1:0)},t.prototype.fixed32=function(_){return this._push(g,4,_>>>0)},t.prototype.sfixed32=t.prototype.fixed32,t.prototype.fixed64=function(_){var y=p.from(_);return this._push(g,4,y.lo)._push(g,4,y.hi)},t.prototype.sfixed64=t.prototype.fixed64,t.prototype.float=function(_){return this._push(c.float.writeFloatLE,4,_)},t.prototype.double=function(_){return this._push(c.float.writeDoubleLE,8,_)};var m=c.Array.prototype.set?function(_,y,w){y.set(_,w)}:function(_,y,w){for(var v=0;v<_.length;++v)y[w+v]=_[v]};t.prototype.bytes=function(_){var y=_.length>>>0;if(!y)return this._push(r,1,0);if(c.isString(_)){var w=t.alloc(y=s.length(_));s.decode(_,w,0),_=w}return this.uint32(y)._push(m,y,_)},t.prototype.string=function(_){var y=h.length(_);return y?this.uint32(y)._push(h.write,y,_):this._push(r,1,0)},t.prototype.fork=function(){return this.states=new o(this),this.head=this.tail=new f(l,0,0),this.len=0,this},t.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new f(l,0,0),this.len=0),this},t.prototype.ldelim=function(){var _=this.head,y=this.tail,w=this.len;return this.reset().uint32(w),w&&(this.tail.next=_.next,this.tail=y,this.len+=w),this},t.prototype.finish=function(){for(var _=this.head.next,y=this.constructor.alloc(this.len),w=0;_;)_.fn(_.val,y,w),w+=_.len,_=_.next;return y},t._configure=function(_){u=_,t.create=e(),u._configure()}},3155:(b,n,a)=>{b.exports=p;var u=a(1173);(p.prototype=Object.create(u.prototype)).constructor=p;var c=a(9693);function p(){u.call(this)}function s(h,f,l){h.length<40?c.utf8.write(h,f,l):f.utf8Write?f.utf8Write(h,l):f.write(h,l)}p._configure=function(){p.alloc=c._Buffer_allocUnsafe,p.writeBytesBuffer=c.Buffer&&c.Buffer.prototype instanceof Uint8Array&&c.Buffer.prototype.set.name==="set"?function(h,f,l){f.set(h,l)}:function(h,f,l){if(h.copy)h.copy(f,l,0,h.length);else for(var o=0;o>>0;return this.uint32(f),f&&this._push(p.writeBytesBuffer,f,h),this},p.prototype.string=function(h){var f=c.Buffer.byteLength(h);return this.uint32(f),f&&this._push(s,f,h),this},p._configure()},7714:(b,n,a)=>{n.R=void 0;const u=a(6919),c=a(7448);n.R=new class{async init(){}async createSessionHandler(p,s){const h=new u.Session(s);return await h.loadModel(p),new c.OnnxjsSessionHandler(h)}}},4200:(b,n,a)=>{n.c8=n.rX=void 0;const u=a(1670),c=a(5381),p=a(2157),s=a(2306);n.rX=()=>{if((typeof u.env.wasm.initTimeout!="number"||u.env.wasm.initTimeout<0)&&(u.env.wasm.initTimeout=0),typeof u.env.wasm.simd!="boolean"&&(u.env.wasm.simd=!0),typeof u.env.wasm.proxy!="boolean"&&(u.env.wasm.proxy=!1),typeof u.env.wasm.numThreads!="number"||!Number.isInteger(u.env.wasm.numThreads)||u.env.wasm.numThreads<=0){const h=typeof navigator>"u"?(0,c.cpus)().length:navigator.hardwareConcurrency;u.env.wasm.numThreads=Math.min(4,Math.ceil((h||1)/2))}},n.c8=new class{async init(){(0,n.rX)(),await(0,p.initWasm)()}async createSessionHandler(h,f){const l=new s.OnnxruntimeWebAssemblySessionHandler;return await l.loadModel(h,f),Promise.resolve(l)}}},6018:function(b,n,a){var u=this&&this.__createBinding||(Object.create?function(s,h,f,l){l===void 0&&(l=f);var o=Object.getOwnPropertyDescriptor(h,f);o&&!("get"in o?!h.__esModule:o.writable||o.configurable)||(o={enumerable:!0,get:function(){return h[f]}}),Object.defineProperty(s,l,o)}:function(s,h,f,l){l===void 0&&(l=f),s[l]=h[f]}),c=this&&this.__exportStar||function(s,h){for(var f in s)f==="default"||Object.prototype.hasOwnProperty.call(h,f)||u(h,s,f)};Object.defineProperty(n,"__esModule",{value:!0}),c(a(1670),n);const p=a(1670);{const s=a(7714).R;(0,p.registerBackend)("webgl",s,-10)}{const s=a(4200).c8;(0,p.registerBackend)("cpu",s,10),(0,p.registerBackend)("wasm",s,10),(0,p.registerBackend)("xnnpack",s,9)}},246:(b,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createAttributeWithCacheKey=void 0;class a{constructor(c){Object.assign(this,c)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(c=>`${this[c]}`).join(";")),this._cacheKey}}n.createAttributeWithCacheKey=u=>new a(u)},7778:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Attribute=void 0;const u=a(1446),c=a(9395),p=a(9162),s=a(2517);var h=c.onnxruntime.experimental.fbs;class f{constructor(o){if(this._attributes=new Map,o!=null){for(const t of o)t instanceof u.onnx.AttributeProto?this._attributes.set(t.name,[f.getValue(t),f.getType(t)]):t instanceof h.Attribute&&this._attributes.set(t.name(),[f.getValue(t),f.getType(t)]);if(this._attributes.sizep.Tensor.fromProto(r));if(o instanceof h.Attribute)return e.map(r=>p.Tensor.fromOrtTensor(r))}if(t===u.onnx.AttributeProto.AttributeType.STRING&&o instanceof u.onnx.AttributeProto){const r=e;return(0,s.decodeUtf8String)(r)}return t===u.onnx.AttributeProto.AttributeType.STRINGS&&o instanceof u.onnx.AttributeProto?e.map(s.decodeUtf8String):e}static getValueNoCheck(o){return o instanceof u.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(o):this.getValueNoCheckFromOrtFormat(o)}static getValueNoCheckFromOnnxFormat(o){switch(o.type){case u.onnx.AttributeProto.AttributeType.FLOAT:return o.f;case u.onnx.AttributeProto.AttributeType.INT:return o.i;case u.onnx.AttributeProto.AttributeType.STRING:return o.s;case u.onnx.AttributeProto.AttributeType.TENSOR:return o.t;case u.onnx.AttributeProto.AttributeType.GRAPH:return o.g;case u.onnx.AttributeProto.AttributeType.FLOATS:return o.floats;case u.onnx.AttributeProto.AttributeType.INTS:return o.ints;case u.onnx.AttributeProto.AttributeType.STRINGS:return o.strings;case u.onnx.AttributeProto.AttributeType.TENSORS:return o.tensors;case u.onnx.AttributeProto.AttributeType.GRAPHS:return o.graphs;default:throw new Error(`unsupported attribute type: ${u.onnx.AttributeProto.AttributeType[o.type]}`)}}static getValueNoCheckFromOrtFormat(o){switch(o.type()){case h.AttributeType.FLOAT:return o.f();case h.AttributeType.INT:return o.i();case h.AttributeType.STRING:return o.s();case h.AttributeType.TENSOR:return o.t();case h.AttributeType.GRAPH:return o.g();case h.AttributeType.FLOATS:return o.floatsArray();case h.AttributeType.INTS:{const t=[];for(let e=0;e{Object.defineProperty(n,"__esModule",{value:!0}),n.resolveBackend=n.backend=void 0;const u=a(5038),c=new Map;async function p(s){const h=n.backend;if(h[s]!==void 0&&function(f){const l=f;return"initialize"in l&&typeof l.initialize=="function"&&"createSessionHandler"in l&&typeof l.createSessionHandler=="function"&&"dispose"in l&&typeof l.dispose=="function"}(h[s])){const f=h[s];let l=f.initialize();if(typeof l=="object"&&"then"in l&&(l=await l),l)return c.set(s,f),f}}n.backend={webgl:new u.WebGLBackend},n.resolveBackend=async function s(h){if(!h)return s(["webgl"]);{const f=typeof h=="string"?[h]:h;for(const l of f){const o=c.get(l);if(o)return o;const t=await p(l);if(t)return t}}throw new Error("no available backend to use")}},5038:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLBackend=void 0;const u=a(1670),c=a(6231),p=a(6416),s=a(7305);n.WebGLBackend=class{get contextId(){return u.env.webgl.contextId}set contextId(h){u.env.webgl.contextId=h}get matmulMaxBatchSize(){return u.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(h){u.env.webgl.matmulMaxBatchSize=h}get textureCacheMode(){return u.env.webgl.textureCacheMode}set textureCacheMode(h){u.env.webgl.textureCacheMode=h}get pack(){return u.env.webgl.pack}set pack(h){u.env.webgl.pack=h}get async(){return u.env.webgl.async}set async(h){u.env.webgl.async=h}initialize(){try{return this.glContext=(0,s.createWebGLContext)(this.contextId),typeof this.matmulMaxBatchSize!="number"&&(this.matmulMaxBatchSize=16),typeof this.textureCacheMode!="string"&&(this.textureCacheMode="full"),typeof this.pack!="boolean"&&(this.pack=!1),typeof this.async!="boolean"&&(this.async=!1),c.Logger.setWithEnv(u.env),c.Logger.verbose("WebGLBackend",`Created WebGLContext: ${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(h){return c.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. ${h}`),!1}}createSessionHandler(h){return new p.WebGLSessionHandler(this,h)}dispose(){this.glContext.dispose()}}},5107:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.CoordsGlslLib=void 0;const u=a(2517),c=a(8520),p=a(5060),s=a(7859),h=a(9390);class f extends c.GlslLib{constructor(o){super(o)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new c.GlslLibRoutine(` - vec2 offsetToCoords(int offset, int width, int height) { - int t = offset / width; - int s = offset - t*width; - vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height); - return coords; - } - `)}}coordsToOffset(){return{coordsToOffset:new c.GlslLibRoutine(` - int coordsToOffset(vec2 coords, int width, int height) { - float s = coords.s * float(width); - float t = coords.t * float(height); - int offset = int(t) * width + int(s); - return offset; - } - `)}}getOutputSamplingSnippet(){const o=this.context.outputTextureLayout;return o.isPacked?this.getPackedOutputSamplingSnippet(o):this.getUnpackedOutputSamplingSnippet(o)}getPackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputPacked1DCoords(t,e);break;case 2:r[i]=this.getOutputPacked2DCoords(t,e);break;case 3:r[i]=this.getOutputPacked3DCoords(t,e);break;default:r[i]=this.getOutputPackedNDCoords(t,e)}const d=` - void setOutput(vec4 val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = val; - } - `;return r.floatTextureSetRGBA=new c.GlslLibRoutine(d),r}getUnpackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputUnpacked1DCoords(t,e);break;case 2:r[i]=this.getOutputUnpacked2DCoords(t,e);break;case 3:r[i]=this.getOutputUnpacked3DCoords(t,e);break;case 4:r[i]=this.getOutputUnpacked4DCoords(t,e);break;case 5:r[i]=this.getOutputUnpacked5DCoords(t,e);break;case 6:r[i]=this.getOutputUnpacked6DCoords(t,e);break;default:throw new Error(`Unsupported output dimensionality: ${t.length}`)}const d=` - void setOutput(float val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0); - } - `;return r.floatTextureSetR=new c.GlslLibRoutine(d),r}getOutputScalarCoords(){return new c.GlslLibRoutine(` - int getOutputCoords() { - return 0; - } - `)}getOutputPacked1DCoords(o,t){const e=t;let r="";return e[0]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.y * ${e[1]}.0); - } - `,new c.GlslLibRoutine(r)):e[1]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.x * ${e[0]}.0); - } - `,new c.GlslLibRoutine(r)):(r=` - int getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - return 2 * (resTexRC.y * ${e[0]} + resTexRC.x); - } - `,new c.GlslLibRoutine(r))}getOutputPacked2DCoords(o,t){let e="";if(u.ArrayUtil.arraysEqual(o,t))return e=` - ivec2 getOutputCoords() { - return 2 * ivec2(TexCoords.xy * vec2(${t[0]}, ${t[1]})); - } - `,new c.GlslLibRoutine(e);const r=t,i=Math.ceil(o[1]/2);return e=` - ivec2 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${r[0]}, ${r[1]})); - - int index = resTexRC.y * ${r[0]} + resTexRC.x; - - // reverse r and c order for packed texture - int r = imod(index, ${i}) * 2; - int c = 2 * (index / ${i}); - - return ivec2(r, c); - } - `,new c.GlslLibRoutine(e)}getOutputPacked3DCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[2]/2),i=r*Math.ceil(o[1]/2),d=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - int index = resTexRC.y * ${e[0]} + resTexRC.x; - - int b = index / ${i}; - index -= b * ${i}; - - // reverse r and c order for packed texture - int r = imod(index, ${r}) * 2; - int c = 2 * (index / ${r}); - - return ivec3(b, r, c); - } - `;return new c.GlslLibRoutine(d)}getOutputPackedNDCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[o.length-1]/2),i=r*Math.ceil(o[o.length-2]/2);let d=i,g="",m="b, r, c";for(let y=2;y=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d"],g=i.map((m,_)=>`int ${d[_]} = index / ${m}; ${_===i.length-1?`int ${d[_+1]} = index - ${d[_]} * ${m}`:`index -= ${d[_]} * ${m}`};`).join("");return e=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec3(r, c, d); - } - `,new c.GlslLibRoutine(e)}getOutputUnpacked4DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2"],g=i.map((m,_)=>`int ${d[_]} = index / ${m}; ${_===i.length-1?`int ${d[_+1]} = index - ${d[_]} * ${m}`:`index -= ${d[_]} * ${m}`};`).join("");return e=` - ivec4 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec4(r, c, d, d2); - } - `,new c.GlslLibRoutine(e)}getOutputUnpacked5DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3"],g=i.map((m,_)=>`int ${d[_]} = index / ${m}; ${_===i.length-1?`int ${d[_+1]} = index - ${d[_]} * ${m}`:`index -= ${d[_]} * ${m}`};`).join("");return e=` - ivec5 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec5(r, c, d, d2, d3); - } - `,new c.GlslLibRoutine(e)}getOutputUnpacked6DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3","d4"],g=i.map((m,_)=>`int ${d[_]} = index / ${m}; ${_===i.length-1?`int ${d[_+1]} = index - ${d[_]} * ${m}`:`index -= ${d[_]} * ${m}`};`).join("");return e=` - ivec6 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec6(r, c, d, d2, d3, d4); - } - `,new c.GlslLibRoutine(e)}getCommonUtilFuncs(){const o={};let t="uvFromFlat";o[t]=new c.GlslLibRoutine(` - vec2 uvFromFlat(int texNumR, int texNumC, int index) { - int texC = index / texNumR; - int texR = index - texC * texNumR; - // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to - // v. - return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC); - } - `),t="packedUVfrom1D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom1D(int texNumR, int texNumC, int index) { - int texelIndex = index / 2; - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom2D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) { - int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2); - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom3D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom3D(int texNumR, int texNumC, - int texelsInBatch, int texelsInLogicalRow, int b, - int row, int col) { - int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2); - int texR = index / texNumC; - int texC = index - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="sampleTexture";const e=(0,p.getGlsl)(this.context.glContext.version);return o[t]=new c.GlslLibRoutine(` - float sampleTexture(sampler2D textureSampler, vec2 uv) { - return ${e.texture2D}(textureSampler, uv).r; - }`),o}getInputsSamplingSnippets(){const o={},t=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((e,r)=>{const i=this.context.inputTextureLayouts[r],d=(0,h.generateShaderFuncNameFromInputSamplerName)(e);i.isPacked?o[d]=this.getPackedSamplerFromInput(d,e,i):o[d]=this.getUnpackedSamplerFromInput(d,e,i);const g=(0,h.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(e);i.unpackedShape.length<=t.unpackedShape.length&&(i.isPacked?o[g]=this.getPackedSamplerAtOutputCoords(g,i,t,e):o[g]=this.getUnpackedSamplerAtOutputCoords(g,i,t,e))}),o}getPackedSamplerAtOutputCoords(o,t,e,r){const i=t.unpackedShape,d=e.unpackedShape,g=r,m=(0,h.generateShaderFuncNameFromInputSamplerName)(g),_=i.length,y=d.length,w=u.BroadcastUtil.getBroadcastDims(i,d),v=(0,h.getCoordsDataType)(y),S=y-_;let O;const A=(0,h.getGlChannels)();O=_===0?"":y<2&&w.length>=1?"coords = 0;":w.map(L=>`coords.${A[L+S]} = 0;`).join(` -`);let T="";T=y<2&&_>0?"coords":i.map((L,H)=>`coords.${A[H+S]}`).join(", ");let M="return outputValue;";const N=u.ShapeUtil.size(i)===1,B=u.ShapeUtil.size(d)===1;if(_!==1||N||B){if(N&&!B)M=y===1?` - return vec4(outputValue.x, outputValue.x, 0., 0.); - `:` - return vec4(outputValue.x); - `;else if(w.length){const L=_-2,H=_-1;w.indexOf(L)>-1&&w.indexOf(H)>-1?M="return vec4(outputValue.x);":w.indexOf(L)>-1?M="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":w.indexOf(H)>-1&&(M="return vec4(outputValue.xx, outputValue.zz);")}}else M=` - return vec4(outputValue.xy, outputValue.xy); - `;const $=` - vec4 ${o}() { - ${v} coords = getOutputCoords(); - - int lastDim = coords.${A[y-1]}; - coords.${A[y-1]} = coords.${A[y-2]}; - coords.${A[y-2]} = lastDim; - - ${O} - vec4 outputValue = ${m}(${T}); - ${M} - } - `;return new c.GlslLibRoutine($,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(o,t,e,r){const i=[e.width,e.height],d=[t.width,t.height],g=t.unpackedShape.length,m=e.unpackedShape.length,_=t.unpackedShape,y=e.unpackedShape,w=(0,h.generateShaderFuncNameFromInputSamplerName)(r);if(g===m&&u.ArrayUtil.arraysEqual(d,i)){const B=` - float ${o}() { - return sampleTexture(${r}, TexCoords); - } - `;return new c.GlslLibRoutine(B,["coordinates.sampleTexture"])}const v=(0,h.getCoordsDataType)(m),S=u.BroadcastUtil.getBroadcastDims(_,y),O=m-g;let A;const T=(0,h.getGlChannels)();A=g===0?"":m<2&&S.length>=1?"coords = 0;":S.map(B=>`coords.${T[B+O]} = 0;`).join(` -`);let M="";M=m<2&&g>0?"coords":t.unpackedShape.map((B,$)=>`coords.${T[$+O]}`).join(", ");const N=` - float ${o}() { - ${v} coords = getOutputCoords(); - ${A} - return ${w}(${M}); - } - `;return new c.GlslLibRoutine(N,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(o,t,e){switch(e.unpackedShape.length){case 0:return this.getPackedSamplerScalar(o,t);case 1:return this.getPackedSampler1D(o,t,e);case 2:return this.getPackedSampler2D(o,t,e);case 3:return this.getPackedSampler3D(o,t,e);default:return this.getPackedSamplerND(o,t,e)}}getUnpackedSamplerFromInput(o,t,e){const r=e.unpackedShape;switch(r.length){case 0:return this.getUnpackedSamplerScalar(o,t,e);case 1:return this.getUnpackedSampler1D(o,t,e);case 2:return this.getUnpackedSampler2D(o,t,e);case 3:return this.getUnpackedSampler3D(o,t,e);case 4:return this.getUnpackedSampler4D(o,t,e);case 5:return this.getUnpackedSampler5D(o,t,e);case 6:return this.getUnpackedSampler6D(o,t,e);default:throw new Error(`Unsupported dimension ${r.length}-D`)}}getPackedSamplerScalar(o,t){const e=` - vec4 ${o}() { - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${t}, halfCR); - } - `;return new c.GlslLibRoutine(e)}getPackedSampler1D(o,t,e){const r=[e.width,e.height],i=[r[1],r[0]],d=(0,p.getGlsl)(this.context.glContext.version),g=`vec4 ${o}(int index) { - vec2 uv = packedUVfrom1D( - ${i[0]}, ${i[1]}, index); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(g,["coordinates.packedUVfrom1D"])}getPackedSampler2D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=(0,p.getGlsl)(this.context.glContext.version),g=i[0],m=i[1];if(i!=null&&u.ArrayUtil.arraysEqual(r,i)){const v=`vec4 ${o}(int row, int col) { - vec2 uv = (vec2(col, row) + halfCR) / vec2(${m}.0, ${g}.0); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(v)}const _=i,y=Math.ceil(r[1]/2),w=`vec4 ${o}(int row, int col) { - vec2 uv = packedUVfrom2D(${_[1]}, ${_[0]}, ${y}, row, col); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(w,["coordinates.packedUVfrom2D"])}getPackedSampler3D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=[i[0],i[1]],g=(0,p.getGlsl)(this.context.glContext.version);if(r[0]===1){const v=r.slice(1),S=[1,2],O=(0,h.squeezeInputShape)(r,v),A=["b","row","col"],T=JSON.parse(JSON.stringify(e));T.unpackedShape=O;const M=this.getPackedSamplerFromInput(o,t,T),N=`${M.routineBody} - vec4 ${o}(int b, int row, int col) { - return ${o}(${(0,h.getSqueezedParams)(A,S)}); - } `;return new c.GlslLibRoutine(N,M.dependencies)}const m=d[0],_=d[1],y=Math.ceil(r[2]/2),w=`vec4 ${o}(int b, int row, int col) { - vec2 uv = packedUVfrom3D( - ${_}, ${m}, ${y*Math.ceil(r[1]/2)}, ${y}, b, row, col); - return ${g.texture2D}(${t}, uv);}`;return new c.GlslLibRoutine(w,["coordinates.packedUVfrom3D"])}getPackedSamplerND(o,t,e){const r=e.unpackedShape,i=r.length,d=[e.width,e.height],g=(0,p.getGlsl)(this.context.glContext.version),m=[d[0],d[1]],_=m[1],y=m[0],w=Math.ceil(r[i-1]/2);let v=w*Math.ceil(r[i-2]/2),S="int b, int row, int col",O=`b * ${v} + (row / 2) * ${w} + (col / 2)`;for(let T=2;T{const r=this.context.inputTextureLayouts[e],i=(r.unpackedShape.length>0?r.unpackedShape:r.shape).length;let d=`_${t}`;o[d]=new c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!1),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),d+="_T",o[d]=new c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!0),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),o}getValueFromSingle(o,t,e,r,i){let d=`_${o}`;return i&&(d+="_T"),` - float ${d}(int m[${t}]) { - int offset = indicesToOffset${d}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - float value = getColorAsFloat(${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords)); - return value; - } - `}getPackedValueFrom(o,t,e,r,i){let d=`_${o}_Pack`;return i&&(d+="_T"),` - vec4 ${d}(int m[${t}]) { - int offset = indicesToOffset_${o}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords); - } - `}}n.CoordsGlslLib=f},8520:(b,n)=>{var a;Object.defineProperty(n,"__esModule",{value:!0}),n.TopologicalSortGlslRoutines=n.GlslLibRoutineNode=n.GlslLibRoutine=n.GlslLib=n.GlslContext=n.FunctionType=void 0,(a=n.FunctionType||(n.FunctionType={}))[a.ValueBased=0]="ValueBased",a[a.Positional=1]="Positional",n.GlslContext=class{constructor(u,c,p,s){this.glContext=u,this.programInfo=c,this.inputTextureLayouts=p,this.outputTextureLayout=s}},n.GlslLib=class{constructor(u){this.context=u}},n.GlslLibRoutine=class{constructor(u,c){this.routineBody=u,this.dependencies=c}},n.GlslLibRoutineNode=class{constructor(u,c,p){this.name=u,this.dependencies=p||[],c&&(this.routineBody=c)}addDependency(u){u&&this.dependencies.push(u)}},n.TopologicalSortGlslRoutines=class{static returnOrderedNodes(u){if(!u||u.length===0)return[];if(u.length===1)return u;const c=new Set,p=new Set,s=new Array;return this.createOrderedNodes(u,c,p,s),s}static createOrderedNodes(u,c,p,s){for(let h=0;h0)for(let f=0;f{Object.defineProperty(n,"__esModule",{value:!0}),n.EncodingGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new u.GlslLibRoutine(`highp vec4 encode(highp float f) { - return vec4(f, 0.0, 0.0, 0.0); - } - `)}}decodeFloat32(){return{decode:new u.GlslLibRoutine(`highp float decode(highp vec4 rgba) { - return rgba.r; - } - `)}}encodeUint8(){const s=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new u.GlslLibRoutine(` - highp vec4 encode(highp float f) { - highp float F = abs(f); - highp float Sign = step(0.0,-f); - highp float Exponent = floor(log2(F)); - highp float Mantissa = (exp2(- Exponent) * F); - Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa)); - highp vec4 rgba; - rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0)); - rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0); - rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0))); - rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0))); - ${s} - rgba = rgba / 255.0; // values need to be normalized to [0,1] - return rgba; - } - `)}}decodeUint8(){const s=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new u.GlslLibRoutine(` - highp float decode(highp vec4 rgba) { - rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255] - ${s} - highp float Sign = 1.0 - step(128.0,rgba[0])*2.0; - highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0; - highp float Mantissa = mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000); - highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 )); - return Result; - } - `)}}static isLittleEndian(){const s=new ArrayBuffer(4),h=new Uint32Array(s),f=new Uint8Array(s);if(h[0]=3735928559,f[0]===239)return!0;if(f[0]===222)return!1;throw new Error("unknown endianness")}}n.EncodingGlslLib=c},9894:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FragColorGlslLib=void 0;const u=a(8520),c=a(5060);class p extends u.GlslLib{constructor(h){super(h)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){const h=(0,c.getGlsl)(this.context.glContext.version);return{setFragColor:new u.GlslLibRoutine(` - void setFragColor(float value) { - ${h.output} = encode(value); - } - `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new u.GlslLibRoutine(` - float getColorAsFloat(vec4 color) { - return decode(color); - } - `,["encoding.decode"])}}}n.FragColorGlslLib=p},2848:(b,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.replaceInlines=void 0;const a=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;n.replaceInlines=function(u){const c={};let p;for(;(p=a.exec(u))!==null;){const s=p[3].split(",").map(h=>{const f=h.trim().split(" ");return f&&f.length===2?{type:f[0],name:f[1]}:null}).filter(h=>h!==null);c[p[2]]={params:s,body:p[4]}}for(const s in c){const h="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",s),f=new RegExp(h,"gm");for(;(p=f.exec(u))!==null;){const l=p[1],o=p[2],t=p[3].split(","),e=l?`${l} ${o};`:"";let r=c[s].body,i="";c[s].params.forEach((g,m)=>{g&&(i+=`${g.type} ${g.name} = ${t[m]}; -`)}),r=`${i} - ${r}`,r=r.replace("return",`${o} = `);const d=` - ${e} - { - ${r} - } - `;u=u.replace(p[0],d)}}return u.replace(a,"")}},8879:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.GlslPreprocessor=void 0;const u=a(8520),c=a(2848),p=a(5483),s=a(5060);n.GlslPreprocessor=class{constructor(h,f,l,o){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new u.GlslContext(h,f,l,o),Object.keys(p.glslRegistry).forEach(e=>{const r=new p.glslRegistry[e](this.context);this.libs[e]=r});const t=this.glslLibRoutineDependencyGraph;for(const e in this.libs){const r=this.libs[e].getFunctions();for(const i in r){const d=e+"."+i;let g;t[d]?(g=t[d],g.routineBody=r[i].routineBody):(g=new u.GlslLibRoutineNode(d,r[i].routineBody),t[d]=g);const m=r[i].dependencies;if(m)for(let _=0;_{const o=l.split(".")[1];h.indexOf(o)!==-1&&f.push(this.glslLibRoutineDependencyGraph[l])}),u.TopologicalSortGlslRoutines.returnOrderedNodes(f)}getUniforms(h,f){const l=[];if(h)for(const o of h)l.push(`uniform sampler2D ${o};`);if(f)for(const o of f)l.push(`uniform ${o.type} ${o.name}${o.arrayLength?`[${o.arrayLength}]`:""};`);return l.join(` -`)}}},5483:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.glslRegistry=void 0;const u=a(5107),c=a(7341),p=a(9894),s=a(2655),h=a(3891);n.glslRegistry={encoding:c.EncodingGlslLib,fragcolor:p.FragColorGlslLib,vec:h.VecGlslLib,shapeUtils:s.ShapeUtilsGlslLib,coordinates:u.CoordsGlslLib}},2655:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ShapeUtilsGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){const s=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((f,l)=>{const o=this.context.inputTextureLayouts[l].unpackedShape;if(o.length<=s){const t=o.length,e=s-t,r=`bcastIndices_${f}`;let i="";for(let g=0;g{const o=this.context.inputTextureLayouts[l].shape;if(!(o.length<2||o.length>s)){const t=o.length,e=s-t,r=`bcastMatmulIndices_${f}`;let i="";for(let g=0;g{const l=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=l.length;let e=`indicesToOffset_${h}`;s[e]=new u.GlslLibRoutine(c.indexToOffsetSingle(e,t,o)),e=`indicesToOffset_${h}_T`,s[e]=new u.GlslLibRoutine(c.indexToOffsetSingle(e,t,o.slice().reverse()))}),s}static indexToOffsetSingle(s,h,f){let l="";for(let o=h-1;o>=0;--o)l+=` - offset += indices[${o}] * ${f[o]}; - `;return` - int ${s}(int indices[${h}]) { - int offset = 0; - ${l} - return offset; - } - `}offsetToIndices(){const s={};return this.context.programInfo.inputNames.forEach((h,f)=>{const l=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=l.length;let e=`offsetToIndices_${h}`;s[e]=new u.GlslLibRoutine(c.offsetToIndicesSingle(e,t,o)),e=`offsetToIndices_${h}_T`,s[e]=new u.GlslLibRoutine(c.offsetToIndicesSingle(e,t,o.slice().reverse()))}),s}static offsetToIndicesSingle(s,h,f){const l=[];for(let o=0;o{const l=this.context.inputTextureLayouts[f].shape,o=l.length,t=`incrementIndices_${h}`;let e="";for(let i=0;i= 0; --i) { - if(i > axis) continue; - indices[i] += 1; - if(indices[i] < shape[i]) { - break; - } - indices[i] = 0; - } - } - `;s[t]=new u.GlslLibRoutine(r)}),s}}n.ShapeUtilsGlslLib=c},5060:(b,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getDefaultFragShaderMain=n.getFragShaderPreamble=n.getVertexShaderSource=n.getGlsl=void 0;const a={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},u={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function c(p){return p===1?a:u}n.getGlsl=c,n.getVertexShaderSource=function(p){const s=c(p);return`${s.version} - precision highp float; - ${s.attribute} vec3 position; - ${s.attribute} vec2 textureCoord; - - ${s.varyingVertex} vec2 TexCoords; - - void main() - { - gl_Position = vec4(position, 1.0); - TexCoords = textureCoord; - }`},n.getFragShaderPreamble=function(p){const s=c(p);return`${s.version} - precision highp float; - precision highp int; - precision highp sampler2D; - ${s.varyingFrag} vec2 TexCoords; - ${s.outputDeclaration} - const vec2 halfCR = vec2(0.5, 0.5); - - // Custom vector types to handle higher dimenalities. - struct ivec5 - { - int x; - int y; - int z; - int w; - int u; - }; - - struct ivec6 - { - int x; - int y; - int z; - int w; - int u; - int v; - }; - - int imod(int x, int y) { - return x - y * (x / y); - } - - `},n.getDefaultFragShaderMain=function(p,s){return` - void main() { - int indices[${s}]; - toVec(TexCoords, indices); - vec4 result = vec4(process(indices)); - ${c(p).output} = result; - } - `}},3891:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.VecGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getCustomTypes(){return{}}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign({},this.binaryVecFunctions()),this.copyVec()),this.setVecItem()),this.getVecItem())}binaryVecFunctions(){const s=this.context.outputTextureLayout.shape.length,h={add:"+=",sub:"-=",mul:"*=",div:"/="},f={};for(const l in h){const o=`${l}Vec`;let t="";for(let r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLInferenceHandler=void 0;const u=a(6231),c=a(9162),p=a(2517),s=a(2403),h=a(7019),f=a(8710),l=a(5611),o=a(4057),t=a(2039);n.WebGLInferenceHandler=class{constructor(e){this.session=e,this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map}calculateTextureWidthAndHeight(e,r){return(0,o.calculateTextureWidthAndHeight)(this.session.layoutStrategy,e,r)}executeProgram(e,r){if(r.length{const S=v.map(A=>`${A.unpackedShape.join(",")};${A.width}x${A.height}`).join("_");let O=w.name;return w.cacheHint&&(O+="["+w.cacheHint+"]"),O+=":"+S,O})(e,i);let g=this.session.programManager.getArtifact(d);const m=g?g.programInfo:typeof e.get=="function"?e.get():e,_=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,m.output.dims,m.output.textureType),y=this.createTextureData(_,m.output.type);return g||(g=this.session.programManager.build(m,i,y),this.session.programManager.setArtifact(d,g)),this.runProgram(g,i,y),y}run(e,r){return this.executeProgram(e,r).tensor}runProgram(e,r,i){for(let d=0;dthis.readTexture(m),async _=>this.readTextureAsync(m),void 0,g),texture:i});return this.setTextureData(m.tensor.dataId,m,e.isPacked),m}getTextureData(e,r=!1){return this.session.isInitializer(e)?this.session.getTextureData(e,r):r?this.packedTextureDataCache.get(e):this.unpackedTextureDataCache.get(e)}setTextureData(e,r,i=!1){this.session.isInitializer(e)?this.session.setTextureData(e,r,i):(i?this.packedTextureDataCache:this.unpackedTextureDataCache).set(e,r)}isTextureLayoutCached(e,r=!1){return!!this.getTextureData(e.dataId,r)}dispose(){this.session.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.unpackedTextureDataCache=new Map}readTexture(e){return e.isPacked?this.readTexture(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTexture(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}async readTextureAsync(e){return e.isPacked?this.readTextureAsync(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTextureAsync(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}pack(e){return this.executeProgram((0,s.createPackProgramInfoLoader)(this,e.tensor),[e.tensor])}unpack(e){return this.executeProgram((0,l.createUnpackProgramInfoLoader)(this,e.tensor),[e.tensor])}}},1640:function(b,n,a){var u=this&&this.__createBinding||(Object.create?function(X,te,ne,me){me===void 0&&(me=ne);var Me=Object.getOwnPropertyDescriptor(te,ne);Me&&!("get"in Me?!te.__esModule:Me.writable||Me.configurable)||(Me={enumerable:!0,get:function(){return te[ne]}}),Object.defineProperty(X,me,Me)}:function(X,te,ne,me){me===void 0&&(me=ne),X[me]=te[ne]}),c=this&&this.__setModuleDefault||(Object.create?function(X,te){Object.defineProperty(X,"default",{enumerable:!0,value:te})}:function(X,te){X.default=te}),p=this&&this.__importStar||function(X){if(X&&X.__esModule)return X;var te={};if(X!=null)for(var ne in X)ne!=="default"&&Object.prototype.hasOwnProperty.call(X,ne)&&u(te,X,ne);return c(te,X),te};Object.defineProperty(n,"__esModule",{value:!0}),n.WEBGL_OP_RESOLVE_RULES=void 0;const s=a(2898),h=p(a(7839)),f=a(4196),l=a(2069),o=a(8138),t=a(9663),e=a(5193),r=a(7992),i=a(1253),d=a(4776),g=a(6572),m=a(3346),_=a(5623),y=a(2870),w=a(2143),v=a(4939),S=a(718),O=a(2268),A=a(8117),T=a(2278),M=a(5524),N=a(5975),B=a(3933),$=a(6558),L=a(5723),H=a(3738),C=p(a(4909)),z=a(8428),J=a(9793);n.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",C.abs],["Acos","","7+",C.acos],["Add","","7+",h.add],["And","","7+",h.and],["Asin","","7+",C.asin],["Atan","","7+",C.atan],["AveragePool","","7+",w.averagePool,w.parseAveragePoolAttributes],["BatchNormalization","","7+",s.batchNormalization,s.parseBatchNormalizationAttributes],["Cast","","6+",f.cast,f.parseCastAttributes],["Ceil","","6+",C.ceil],["Clip","","6-10",C.clip,C.parseClipAttributes],["Clip","","11+",C.clipV11],["Concat","","4+",l.concat,l.parseConcatAttributes],["Conv","","1+",o.conv,o.parseConvAttributes],["ConvTranspose","","1+",t.convTranspose,t.parseConvTransposeAttributes],["Cos","","7+",C.cos],["Div","","7+",h.div],["Dropout","","7+",C.identity],["DepthToSpace","","1+",e.depthToSpace,e.parseDepthToSpaceAttributes],["Equal","","7+",h.equal],["Elu","","6+",C.elu,C.parseEluAttributes],["Exp","","6+",C.exp],["Flatten","","1+",r.flatten,r.parseFlattenAttributes],["Floor","","6+",C.floor],["FusedConv","com.microsoft","1+",o.conv,o.parseConvAttributes],["Gather","","1+",i.gather,i.parseGatherAttributes],["Gemm","","7-10",d.gemm,d.parseGemmAttributesV7],["Gemm","","11+",d.gemm,d.parseGemmAttributesV11],["GlobalAveragePool","","1+",w.globalAveragePool,w.parseGlobalAveragePoolAttributes],["GlobalMaxPool","","1+",w.globalMaxPool],["Greater","","7+",h.greater],["Identity","","1+",C.identity],["ImageScaler","","1+",g.imageScaler,g.parseImageScalerAttributes],["InstanceNormalization","","6+",m.instanceNormalization,m.parseInstanceNormalizationAttributes],["LeakyRelu","","6+",C.leakyRelu,C.parseLeakyReluAttributes],["Less","","7+",h.less],["Log","","6+",C.log],["MatMul","","1+",_.matMul,_.parseMatMulAttributes],["MaxPool","","1+",w.maxPool,w.parseMaxPoolAttributes],["Mul","","7+",h.mul],["Neg","","6+",C.neg],["Not","","1+",C.not],["Or","","7+",h.or],["Pad","","2-10",y.padV2,y.parsePadAttributesV2],["Pad","","11+",y.padV11,y.parsePadAttributesV11],["Pow","","7+",h.pow],["PRelu","","7+",h.pRelu],["ReduceLogSum","","1+",v.reduceLogSum,v.parseReduceAttributes],["ReduceMax","","1+",v.reduceMax,v.parseReduceAttributes],["ReduceMean","","1+",v.reduceMean,v.parseReduceAttributes],["ReduceMin","","1+",v.reduceMin,v.parseReduceAttributes],["ReduceProd","","1+",v.reduceProd,v.parseReduceAttributes],["ReduceSum","","1-12",v.reduceSum,v.parseReduceAttributes],["ReduceSumSquare","","1+",v.reduceLogSumSquare,v.parseReduceAttributes],["Relu","","6+",C.relu],["Reshape","","5+",S.reshape],["Resize","","10",O.resize,O.parseResizeAttributesV10],["Resize","","11+",O.resize,O.parseResizeAttributesV11],["Shape","","1+",A.shape],["Sigmoid","","6+",C.sigmoid],["Sin","","7+",C.sin],["Slice","","10+",T.sliceV10],["Slice","","1-9",T.slice,T.parseSliceAttributes],["Softmax","","1-12",M.softmax,M.parseSoftmaxAttributes],["Softmax","","13+",M.softmaxV13,M.parseSoftmaxAttributesV13],["Split","","2-12",N.split,N.parseSplitAttributes],["Sqrt","","6+",C.sqrt],["Squeeze","","1-12",B.squeeze,B.parseSqueezeAttributes],["Squeeze","","13+",B.squeezeV13],["Sub","","7+",h.sub],["Sum","","6+",$.sum],["Tan","","7+",C.tan],["Tanh","","6+",C.tanh],["Tile","","6+",L.tile],["Transpose","","1+",H.transpose,H.parseTransposeAttributes],["Upsample","","7-8",J.upsample,J.parseUpsampleAttributesV7],["Upsample","","9",J.upsample,J.parseUpsampleAttributesV9],["Unsqueeze","","1-12",z.unsqueeze,z.parseUnsqueezeAttributes],["Unsqueeze","","13+",z.unsqueezeV13],["Xor","","7+",h.xor]]},2898:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseBatchNormalizationAttributes=n.batchNormalization=void 0;const u=a(246),c=a(5060),p=a(2039),s={name:"BatchNormalization",inputNames:["A","Scale","B","Mean","Variance"],inputTypes:[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]};n.batchNormalization=(l,o,t)=>(f(o),[l.run(Object.assign(Object.assign({},s),{cacheHint:t.cacheKey,get:()=>h(l,o,t)}),o)]),n.parseBatchNormalizationAttributes=l=>{const o=l.attributes.getFloat("epsilon",1e-5),t=l.attributes.getFloat("momentum",.9),e=l.attributes.getInt("spatial",1);return(0,u.createAttributeWithCacheKey)({epsilon:o,momentum:t,spatial:e})};const h=(l,o,t)=>{const e=(0,c.getGlsl)(l.session.backend.glContext.version),r=o[0].dims.length,[i,d]=l.calculateTextureWidthAndHeight(o[1].dims,p.TextureType.unpacked),g=` - float process(int[${r}] indices) { - vec2 position = offsetToCoords(indices[1], ${i}, ${d}); - float scale = getColorAsFloat(${e.texture2D}(Scale, position)); - float mean = getColorAsFloat(${e.texture2D}(Mean, position)); - float variance = getColorAsFloat(${e.texture2D}(Variance, position)); - float b = getColorAsFloat(${e.texture2D}(B, position)); - - return scale * ( (_A(indices) - mean) / sqrt(variance + float(${t.epsilon})) ) + b; - }`;return Object.assign(Object.assign({},s),{output:{dims:o[0].dims,type:o[0].type,textureType:p.TextureType.unpacked},shaderSource:g})},f=l=>{if(!l||l.length!==5)throw new Error("BatchNormalization requires 5 inputs.");const o=l[0],t=l[1],e=l[2],r=l[3],i=l[4];if(o.dims.length<3||t.dims.length!==1||e.dims.length!==1||r.dims.length!==1||i.dims.length!==1)throw new Error("invalid input shape.");if(t.dims[0]!==o.dims[1]||e.dims[0]!==o.dims[1]||r.dims[0]!==o.dims[1]||i.dims[0]!==o.dims[1])throw new Error("invalid input shape.");if(o.type!=="float32"&&o.type!=="float64"||t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64"||i.type!=="float32"&&i.type!=="float64")throw new Error("invalid input tensor types.")}},7839:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.xor=n.sub=n.pRelu=n.pow=n.or=n.mul=n.less=n.greater=n.equal=n.div=n.and=n.add=n.glslPRelu=n.glslPow=n.glslXor=n.glslOr=n.glslAnd=n.glslLess=n.glslGreater=n.glslEqual=n.glslSub=n.glslMul=n.glslDiv=n.glslAdd=void 0;const u=a(2517),c=a(8520),p=a(5060),s=a(2039);function h(){const v="add_";return{body:` - float ${v}(float a, float b) { - return a + b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 + v2; - } - `,name:v,type:c.FunctionType.ValueBased}}function f(){const v="div_";return{body:` - float ${v}(float a, float b) { - return a / b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 / v2; - } - `,name:v,type:c.FunctionType.ValueBased}}function l(){const v="mul_";return{body:` - float ${v}(float a, float b) { - return a * b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 * v2; - } - `,name:v,type:c.FunctionType.ValueBased}}function o(){const v="sub_";return{body:` - float ${v}(float a, float b) { - return a - b; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return v1 - v2; - } - `,name:v,type:c.FunctionType.ValueBased}}function t(){const v="equal_";return{body:` - float ${v}(float a, float b) { - return float(a == b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4(equal(v1, v2)); - } - `,name:v,type:c.FunctionType.ValueBased}}function e(){const v="greater_";return{body:` - float ${v}(float a, float b) { - return float(a > b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( v1.r > v2.r , - v1.g > v2.g, - v1.b > v2.b, - v1.a > v2.a ); - } - `,name:v,type:c.FunctionType.ValueBased}}function r(){const v="less_";return{body:` - float ${v}(float a, float b) { - return float(a < b); - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( v1.r < v2.r , - v1.g < v2.g, - v1.b < v2.b, - v1.a < v2.a ); - } - `,name:v,type:c.FunctionType.ValueBased}}function i(){const v="and_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) && bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r && b2.r , - b1.g && b2.g, - b1.b && b2.b, - b1.a && b2.a ); - } - `,name:v,type:c.FunctionType.ValueBased}}function d(){const v="or_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) || bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r || b2.r , - b1.g || b2.g, - b1.b || b2.b, - b1.a || b2.a ); - } - `,name:v,type:c.FunctionType.ValueBased}}function g(){const v="xor_";return{body:` - float ${v}(float a, float b) { - return float( bool(a) ^^ bool(b) ); - } - vec4 ${v}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r ^^ b2.r , - b1.g ^^ b2.g, - b1.b ^^ b2.b, - b1.a ^^ b2.a ); - } - `,name:v,type:c.FunctionType.ValueBased}}function m(){return function(v){const S=`${v}_`;return{body:` - float ${S}(float a, float b) { - return ${v}(a, b); - } - vec4 ${S}(vec4 v1, vec4 v2) { - return ${v}(v1, v2); - } - `,name:S,type:c.FunctionType.ValueBased}}("pow")}function _(){const v="prelu_";return{body:` - float ${v}(float a, float b) { - return a < 0.0 ? a * b: a; - } - vec4 ${v}(vec4 v1, vec4 v2) { - return vec4( - v1.r < 0.0 ? v1.r * v2.r: v1.r, - v1.g < 0.0 ? v1.g * v2.g: v1.g, - v1.b < 0.0 ? v1.b * v2.b: v1.b, - v1.a < 0.0 ? v1.a * v2.a: v1.a - ); - } - `,name:v,type:c.FunctionType.ValueBased}}n.glslAdd=h,n.glslDiv=f,n.glslMul=l,n.glslSub=o,n.glslEqual=t,n.glslGreater=e,n.glslLess=r,n.glslAnd=i,n.glslOr=d,n.glslXor=g,n.glslPow=m,n.glslPRelu=_;const y=(v,S,O,A=S[0].type,T)=>{const M=v.session.pack?s.TextureType.packed:s.TextureType.unpacked;return{name:O.name,inputNames:["A","B"],inputTypes:[M,M],cacheHint:T,get:()=>w(v,S,O,A)}},w=(v,S,O,A=S[0].type)=>{const T=v.session.pack?s.TextureType.packed:s.TextureType.unpacked,M=!u.ShapeUtil.areEqual(S[0].dims,S[1].dims);let N=S[0].dims;const B=v.session.pack;if(M){const H=u.BroadcastUtil.calcShape(S[0].dims,S[1].dims,!1);if(!H)throw new Error("Can't perform binary op on the given tensors");N=H;const C=N.length,z=S[0].dims.length!==0?S[0].dims.length:1,J=S[1].dims.length!==0?S[1].dims.length:1,X=S[0].dims.length!==0?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",te=S[1].dims.length!==0?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",ne=(0,p.getGlsl)(v.session.backend.glContext.version),me=B?` - ${O.body} - void main() { - vec4 a = getAAtOutCoords(); - vec4 b = getBAtOutCoords(); - vec4 result = ${O.name}(a, b); - ${ne.output} = result; - }`:` - ${O.body} - float process(int indices[${C}]) { - int aindices[${z}]; - int bindices[${J}]; - ${X} - ${te} - return ${O.name}(_A(aindices), _B(bindices)); - }`;return{name:O.name,inputNames:["A","B"],inputTypes:[T,T],output:{dims:N,type:A,textureType:T},shaderSource:me,hasMain:B}}const $=(0,p.getGlsl)(v.session.backend.glContext.version),L=` - ${O.body} - void main() { - vec4 v1 = ${$.texture2D}(A, TexCoords); - vec4 v2 = ${$.texture2D}(B, TexCoords); - vec4 result = ${O.name}(v1, v2); - ${$.output} = result; - } - `;return{name:O.name,inputNames:["A","B"],inputTypes:[T,T],output:{dims:S[0].dims,type:A,textureType:T},shaderSource:L,hasMain:!0}};n.add=(v,S)=>[v.run(y(v,S,h()),S)],n.and=(v,S)=>[v.run(y(v,S,i(),"bool"),S)],n.div=(v,S)=>[v.run(y(v,S,f()),S)],n.equal=(v,S)=>[v.run(y(v,S,t(),"bool"),S)],n.greater=(v,S)=>[v.run(y(v,S,e(),"bool"),S)],n.less=(v,S)=>[v.run(y(v,S,r(),"bool"),S)],n.mul=(v,S)=>[v.run(y(v,S,l()),S)],n.or=(v,S)=>[v.run(y(v,S,d(),"bool"),S)],n.pow=(v,S)=>[v.run(y(v,S,m()),S)],n.pRelu=(v,S)=>[v.run(y(v,S,_()),S)],n.sub=(v,S)=>[v.run(y(v,S,o()),S)],n.xor=(v,S)=>[v.run(y(v,S,g(),"bool"),S)]},4196:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseCastAttributes=n.cast=void 0;const u=a(2517);n.cast=(p,s,h)=>(c(s),[p.cast(s[0],h)]),n.parseCastAttributes=p=>u.ProtoUtil.tensorDataTypeFromProto(p.attributes.getInt("to"));const c=p=>{if(!p||p.length!==1)throw new Error("Cast requires 1 input.");if(p[0].type==="string")throw new Error("Invalid input type.")}},1163:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedConcatProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827);n.createPackedConcatProgramInfoLoader=(f,l,o)=>{const t=(e=l.length,r=o.cacheKey,{name:"Concat (packed)",inputNames:Array.from({length:e},(i,d)=>`X${d}`),inputTypes:Array(e).fill(c.TextureType.packed),cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,d,g,m)=>{const _=g[0].dims.slice();if(m>=_.length||m<-1*_.length)throw new Error("axis specified for concat doesn't match input dimensionality");m<0&&(m=_.length+m);const y=_.slice(0);for(let X=1;XX.dims),T=(0,p.getGlChannels)(w),M=new Array(A.length-1);M[0]=A[0][m];for(let X=1;X= ${M[X-1]}) { - return getChannel( - getX${X}(${h(T,N,te)}), - vec2(${h(B,N,te)})); - }`}const H=M.length,C=M[M.length-1];L+=` - return getChannel( - getX${H}(${h(T,N,C)}), - vec2(${h(B,N,C)}));`;const z=(0,u.getGlsl)(i.session.backend.glContext.version),J=` - ${O} - float getValue(${T.map(X=>"int "+X)}) { - ${L} - } - - void main() { - ${S} coords = getOutputCoords(); - int lastDim = coords.${T[w-1]}; - coords.${T[w-1]} = coords.${T[w-2]}; - coords.${T[w-2]} = lastDim; - - vec4 result = vec4(getValue(${v}), 0., 0., 0.); - - ${v[w-1]} = ${v[w-1]} + 1; - if (${v[w-1]} < ${y[w-1]}) { - result.g = getValue(${v}); - } - - ${v[w-2]} = ${v[w-2]} + 1; - if (${v[w-2]} < ${y[w-2]}) { - result.a = getValue(${v}); - } - - ${v[w-1]} = ${v[w-1]} - 1; - if (${v[w-2]} < ${y[w-2]} && - ${v[w-1]} < ${y[w-1]}) { - result.b = getValue(${v}); - } - ${z.output} = result; - } - `;return Object.assign(Object.assign({},d),{output:{dims:y,type:g[0].type,textureType:c.TextureType.packed},shaderSource:J,hasMain:!0})})(f,t,l,o.axis)})};const h=(f,l,o)=>{const t=f.indexOf(l);return f.map((e,r)=>r===t?`${e} - ${o}`:e).join()}},2069:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConcatAttributes=n.concat=void 0;const u=a(246),c=a(2039),p=a(1163);n.concat=(e,r,i)=>(t(r),e.session.pack&&r[0].dims.length>1?[e.run((0,p.createPackedConcatProgramInfoLoader)(e,r,i),r)]:[e.run(s(e,r,i),r)]);const s=(e,r,i)=>{const d=(g=r.length,m=i.cacheKey,{name:"Concat",inputNames:Array.from({length:g},(_,y)=>`X${y}`),inputTypes:Array(g).fill(c.TextureType.unpacked),cacheHint:m});var g,m;return Object.assign(Object.assign({},d),{get:()=>((_,y,w,v)=>{const S=w[0].dims.slice();if(v>=S.length||v<-1*S.length)throw new Error("axis specified for concat doesn't match input dimensionality");v<0&&(v=S.length+v);const O=S.slice(0);for(let $=1;$`int getTextureWhereDataResides(int index) { - ${e.map((r,i)=>`if(index<${r}) {return ${i};} -`).join("")} - }`,f=e=>h(e),l=(e,r)=>{const i=[`float fetchDataFromCorrectTexture(int textureIndex, int indices[${r}]) {`];for(let d=0;d{const r=["int getSizeInConcatAxisValueFromIndex(int index) {"];for(let i=0;i(0,u.createAttributeWithCacheKey)({axis:e.attributes.getInt("axis")});const t=e=>{if(!e||e.length<1)throw new Error("too few inputs");const r=e[0].type,i=e[0].dims.length;if(r==="string")throw new Error("string tensor is not supported yet");for(const d of e){if(d.type!==r)throw new Error("input tensors should be one type");if(d.dims.length!==i)throw new Error("input tensors should have the same shape")}}},4770:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackedGroupedConvProgramInfoLoader=void 0;const u=a(6231),c=a(5060),p=a(2039),s=a(8138),h=a(2823);n.createUnpackedGroupedConvProgramInfoLoader=(f,l,o)=>{const t=(e=l.length>2,r=o.cacheKey,{name:"GroupedConv",inputNames:e?["X","W","Bias"]:["X","W"],inputTypes:e?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,d,g,m)=>{const _=d.length>2?"value += getBias(output_channel);":"",y=d[0].dims.slice(),w=d[1].dims.slice(),v=w[0]/m.group;u.Logger.verbose("GroupedConv",`autpPad:${m.autoPad}, dilations:${m.dilations}, group:${m.group}, kernelShape:${m.kernelShape}, pads:${m.pads}, strides:${m.strides}`);const S=(0,s.calculateOutputShape)(y,w,m.dilations,m.pads,m.strides),O=(0,c.getGlsl)(i.session.backend.glContext.version),{activationFunction:A,applyActivation:T}=(0,h.getActivationSnippet)(m),M=` - const ivec2 strides = ivec2(${m.strides[0]}, ${m.strides[1]}); - const ivec2 pads = ivec2(${m.pads[0]}, ${m.pads[1]}); - ${A} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - ivec2 xRCCorner = coords.zw * strides - pads; - int group_id = output_channel / ${v}; - - float value = 0.0; - for (int wInChannel = 0; wInChannel < ${w[1]}; wInChannel++) { - int input_channel = group_id * ${w[1]} + wInChannel; - for (int wHeight = 0; wHeight < ${w[2]}; wHeight++) { - int xHeight = xRCCorner.x + wHeight * ${m.dilations[0]}; - - if (xHeight < 0 || xHeight >= ${y[2]}) { - continue; - } - - for (int wWidth = 0; wWidth < ${w[3]}; wWidth++) { - int xWidth = xRCCorner.y + wWidth * ${m.dilations[1]}; - if (xWidth < 0 || xWidth >= ${y[3]}) { - continue; - } - - float xVal = getX(batch, input_channel, xWidth, xHeight); - float wVal = getW(output_channel, wInChannel, wWidth, wHeight); - value += xVal*wVal; - } - } - } - ${_} - ${T} - ${O.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},g),{output:{dims:S,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:M,hasMain:!0})})(f,l,t,o)})}},1386:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.conv2DPacked=n.conv2DPackedPointwise=void 0;const u=a(8138),c=a(8555),p=a(708);n.conv2DPackedPointwise=(s,h,f)=>{const l=h[0].dims,o=h[1].dims,t=(0,u.calculateOutputShape)(l,o,f.dilations,f.pads,f.strides),e=s.reshapePacked(h[0],[l[1],l[2]*l[3]]),r=s.reshapePacked(h[1],[o[0],o[1]]),i=h.length>2?[r,e,h[2]]:[r,e],d=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(d,t)},n.conv2DPacked=(s,h,f)=>{const l=h[0].dims,o=h[1].dims,t=(0,u.calculateOutputShape)(l,o,f.dilations,f.pads,f.strides),e=s.run((0,c.createPackedIm2ColProgramInfoLoader)(s,h[0],h[1],t,f),[h[0]]),r=s.reshapePacked(h[1],[o[0],o[1]*o[2]*o[3]]),i=h.length===3?[r,e,h[2]]:[r,e],d=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(d,t)}},9663:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvTransposeAttributes=n.convTranspose=void 0;const u=a(246),c=a(5060),p=a(2039),s=a(2823),h=(r,i,d,g,m,_)=>(r-1)*i+d+(g-1)*m+1-_,f=(r,i,d,g,m)=>{const _=Math.floor(r/2);i==="SAME_UPPER"?(d[g]=_,d[m]=r-_):i==="SAME_LOWER"&&(d[g]=r-_,d[m]=_)};n.convTranspose=(r,i,d)=>(e(i,d),l(r,i,d));const l=(r,i,d)=>{const g=t(d,i);return[o(r,i,g)]},o=(r,i,d)=>r.run(((g,m,_)=>{const y=(w=m.length>2,v=_.cacheKey,{name:"ConvTranspose",inputNames:w?["X","W","B"]:["X","W"],inputTypes:w?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:v});var w,v;return Object.assign(Object.assign({},y),{get:()=>((S,O,A,T)=>{const M=O.length>2?"getB(output_channel)":"0.0",N=O[0].dims,B=O[1].dims,$=B[1],L=B[0]/T.group,H=[O[0].dims[0],O[1].dims[1]*T.group,...T.outputShape],C=(0,c.getGlsl)(S.session.backend.glContext.version),{activationFunction:z,applyActivation:J}=(0,s.getActivationSnippet)(T),X=` - const ivec2 strides = ivec2(${T.strides[0]}, ${T.strides[1]}); - const ivec2 pads = ivec2(${T.pads[0]}, ${T.pads[1]}); - ${z} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - - ivec2 loc = coords.zw + pads; - - int group_id = output_channel / ${$}; - int wOutChannel = output_channel - group_id * ${$}; - - float value = ${M}; - for (int inChannelOffset = 0; inChannelOffset < ${L}; inChannelOffset++) { - int input_channel = group_id * ${L} + inChannelOffset; - for (int wWOff = 0; wWOff < ${B[2]}; wWOff++) { - for (int wHOff = 0; wHOff < ${B[3]}; wHOff++) { - ivec2 wOff = ivec2(wWOff * ${T.dilations[0]}, wHOff * ${T.dilations[1]}); - ivec2 wLoc = loc - wOff; - ivec2 wLocIn = wLoc / strides; - if ( - wLocIn * strides == wLoc && - wLocIn.x >= 0 && wLocIn.x < ${N[2]} && - wLocIn.y >= 0 && wLocIn.y < ${N[3]} - ) { - float xVal = getX(batch, input_channel, wLocIn.y, wLocIn.x); - float wVal = getW(input_channel, wOutChannel, wHOff, wWOff); - value += xVal * wVal; - } - } - } - } - ${J} - ${C.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},A),{output:{dims:H,type:O[0].type,textureType:p.TextureType.unpacked},shaderSource:X,hasMain:!0})})(g,m,y,_)})})(r,i,d),i),t=(r,i)=>{const d=r.kernelShape.slice();if(r.kernelShape.length===0)for(let y=2;y{const N=y.length-2,B=M.length===0;for(let $=0;${const i=r.attributes,d=(0,s.parseInternalActivationAttributes)(i),g=i.getString("auto_pad","NOTSET"),m=i.getInts("dilations",[1,1]),_=i.getInt("group",1),y=i.getInts("kernel_shape",[]),w=i.getInts("output_padding",[0,0]),v=i.getInts("output_shape",[]),S=i.getInts("pads",[0,0,0,0]),O=i.getInts("strides",[1,1]);return(0,u.createAttributeWithCacheKey)(Object.assign({autoPad:g,dilations:m,group:_,kernelShape:y,outputPadding:w,outputShape:v,pads:S,strides:O},d))};const e=(r,i)=>{if(!r||r.length!==2&&r.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(r[0].dims.length!==4||r[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(r[0].dims[1]!==r[1].dims[0])throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");const d=r[1].dims[1]*i.group;if(r.length===3&&(r[2].dims.length!==1||r[2].dims[0]!==d))throw new Error("invalid bias");const g=r[0].dims.length-2;if(i.dilations.length!==g)throw new Error(`dilations should be ${g}D`);if(i.strides.length!==g)throw new Error(`strides should be ${g}D`);if(i.pads.length!==2*g)throw new Error(`pads should be ${2*g}D`);if(i.outputPadding.length!==g)throw new Error(`output_padding should be ${g}D`);if(i.kernelShape.length!==0&&i.kernelShape.length!==r[1].dims.length-2)throw new Error("invalid kernel shape");if(i.outputShape.length!==0&&i.outputShape.length!==r[0].dims.length-2)throw new Error("invalid output shape");if(r[0].type!=="float32"||r[1].type!=="float32")throw new Error("ConvTranspose input(X,W) should be float tensor");if(r.length===3&&r[2].type!=="float32")throw new Error("ConvTranspose input(bias) should be float tensor")}},8138:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvAttributes=n.conv=n.calculateOutputShape=void 0;const u=a(246),c=a(2517),p=a(4770),s=a(1386),h=a(9828),f=a(2823),l=a(3248),o=a(5623);n.calculateOutputShape=(g,m,_,y,w)=>{const v=g[0],S=g.slice(2),O=S.length,A=m[0],T=m.slice(2).map((N,B)=>N+(N-1)*(_[B]-1)),M=S.map((N,B)=>N+y[B]+y[B+O]).map((N,B)=>Math.floor((N-T[B]+w[B])/w[B]));return[v,A].concat(...M)},n.conv=(g,m,_)=>(d(m,_),t(g,m,_));const t=(g,m,_)=>{const y=i(_,m),w=g.session.pack,v=y.kernelShape[0]===1&&y.kernelShape[1]===1;return y.group>1?[g.run((0,p.createUnpackedGroupedConvProgramInfoLoader)(g,m,y),m)]:v&&w?[e(g,m,y)]:w&&m[0].dims.length===4&&m[0].dims[0]===1&&!v?[(0,s.conv2DPacked)(g,m,y)]:[r(g,m,y)]},e=(g,m,_)=>{const y=m[0].dims,w=m[1].dims,v=(0,n.calculateOutputShape)(y,w,_.dilations,_.pads,_.strides),S=g.reshapeUnpacked(m[0],[y[1],y[2]*y[3]]),O=g.reshapeUnpacked(m[1],[w[0],w[1]]),A=m.length>2?[O,S,m[2]]:[O,S],T=g.run((0,o.createMatmulProgramInfoLoader)(A,_),A);return g.reshapeUnpacked(T,v)},r=(g,m,_)=>{const y=m[0].dims,w=m[1].dims,v=(0,n.calculateOutputShape)(y,w,_.dilations,_.pads,_.strides),S=g.run((0,l.createIm2ColProgramInfoLoader)(g,m[0],m[1],v,_),[m[0]]),O=m.length===3?[S,m[1],m[2]]:[S,m[1]];return g.run((0,h.createDotProductProgramInfoLoader)(g,m,v,_),O)},i=(g,m)=>{const _=g.kernelShape.slice();if(g.kernelShape.length===0)for(let v=2;v{const m=g.attributes,_=(0,f.parseInternalActivationAttributes)(m),y=m.getString("auto_pad","NOTSET"),w=m.getInts("dilations",[1,1]),v=m.getInt("group",1),S=m.getInts("kernel_shape",[]),O=m.getInts("pads",[0,0,0,0]),A=m.getInts("strides",[1,1]);return(0,u.createAttributeWithCacheKey)(Object.assign({autoPad:y,dilations:w,group:v,kernelShape:S,pads:O,strides:A},_))};const d=(g,m)=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(g[0].dims.length!==4||g[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(g[0].dims[1]!==g[1].dims[1]*m.group)throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");if(g.length===3&&(g[2].dims.length!==1||g[1].dims[0]!==g[2].dims[0]))throw new Error("invalid bias");const _=g[0].dims.length-2;if(m.dilations.length!==_)throw new Error(`dilations should be ${_}D`);if(m.strides.length!==_)throw new Error(`strides should be ${_}D`);if(m.pads.length!==2*_)throw new Error(`pads should be ${2*_}D`);if(m.kernelShape.length!==0&&m.kernelShape.length!==g[1].dims.length-2)throw new Error("invalid kernel shape");if(g[0].type!=="float32"||g[1].type!=="float32")throw new Error("Conv input(X,W) should be float tensor");if(g.length===3&&g[2].type!=="float32")throw new Error("Conv input(bias) should be float tensor")}},5193:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseDepthToSpaceAttributes=n.depthToSpace=void 0;const u=a(3738);n.depthToSpace=(p,s,h)=>{c(s);const f=h.blocksize,l=f*f,o=h.mode==="DCR"?[0,3,4,1,5,2]:[0,1,4,2,5,3],t=h.mode==="DCR"?[s[0].dims[0],f,f,s[0].dims[1]/l,s[0].dims[2],s[0].dims[3]]:[s[0].dims[0],s[0].dims[1]/l,f,f,s[0].dims[2],s[0].dims[3]],e=p.reshapeUnpacked(s[0],t),r={perm:o,cacheKey:`${o}`},[i]=(0,u.transpose)(p,[e],r),d=[s[0].dims[0],s[0].dims[1]/l,s[0].dims[2]*f,s[0].dims[3]*f];return[p.reshapeUnpacked(i,d)]},n.parseDepthToSpaceAttributes=p=>{const s=p.attributes.getInt("blocksize");if(s<1)throw new Error(`blocksize must be >= 1, but got : ${s} for DepthToSpace`);const h=p.attributes.getString("mode","DCR");if(h!=="DCR"&&h!=="CRD")throw new Error(`unrecognized mode: ${h} for DepthToSpace`);return{mode:h,blocksize:s}};const c=p=>{if(p.length!==1)throw new Error(`DepthToSpace expect 1 inputs, but got ${p.length}`);if(p[0].type==="string"||p[0].dims.length!==4)throw new TypeError("DepthToSpace input should be a 4-D numeric tensor")}},9828:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createDotProductProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(2823),h=a(3248);n.createDotProductProgramInfoLoader=(f,l,o,t)=>{const e=((r,i)=>({name:"ConvDotProduct",inputNames:r?["Im2Col","K","B"]:["Im2Col","K"],inputTypes:r?[p.TextureType.unpacked,p.TextureType.packedLastDimension,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.packedLastDimension],cacheKey:i.activationCacheKey}))(l.length>2,t);return Object.assign(Object.assign({},e),{get:()=>((r,i,d,g,m)=>{const _=d[0].dims,y=d[1].dims,w=[y[0],Math.ceil(_[1]*y[2]*y[3]/4)],v=(0,h.calculateIm2ColDims)(_,y,g),[S,O]=r.calculateTextureWidthAndHeight(w,p.TextureType.packedLastDimension),A=u.ShapeUtil.computeStrides(v),[T,M]=r.calculateTextureWidthAndHeight(v,p.TextureType.packedLastDimension),N=g.length,B=d.length<3?"0.0":"_B(b)",$=Math.ceil(_[1]*y[2]*y[3]/4),{activationFunction:L,applyActivation:H}=(0,s.getActivationSnippet)(m),C=(0,c.getGlsl)(r.session.backend.glContext.version),z=` -${L} -float process(int indices[${N}]) { - int b[1]; - b[0] = indices[1]; - int im2col[4]; - im2col[0] = indices[0]; - im2col[1] = indices[2]; - im2col[2] = indices[3]; - int im2colOffset = im2col[0] * ${A[0]} + im2col[1] * ${A[1]} + im2col[2] * ${A[2]}; - int kernelOffset = indices[1] * ${w[1]}; - float value = ${B}; - for (int i = 0; i < ${$}; ++i) { - vec2 im2colCoords = offsetToCoords(im2colOffset, ${T}, ${M}); - vec2 kernelCoords = offsetToCoords(kernelOffset, ${S}, ${O}); - value += dot(${C.texture2D}(Im2Col, im2colCoords), ${C.texture2D}(K, kernelCoords)); - ++im2colOffset; - ++kernelOffset; - } - ${H} - return value; -}`;return Object.assign(Object.assign({},i),{output:{dims:g,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:z})})(f,e,l,o,t)})}},7992:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseFlattenAttributes=n.flatten=void 0;const u=a(2517);n.flatten=(p,s,h)=>{c(s,h);const f=u.ShapeUtil.flattenShape(s[0].dims,h);return[p.reshapeUnpacked(s[0],f)]},n.parseFlattenAttributes=p=>p.attributes.getInt("axis",1);const c=(p,s)=>{if(!p||p.length!==1)throw new Error("Flatten requires 1 input.");const h=p[0].dims.length;if(h===0)throw new Error("scalar tensor is not supported.");if(s<-h||s>h)throw new Error("Invalid axis");if(p[0].type==="string")throw new Error("string tensor is not supported.")}},2823:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInternalActivationAttributes=n.getActivationSnippet=void 0;const u=a(2517),c=a(4909);n.getActivationSnippet=function(p){let s;switch(p.activation){case"Relu":s=(0,c.glslRelu)();break;case"Sigmoid":s=(0,c.glslSigmoid)();break;case"Clip":s=(0,c.glslClip)(p.clipMin,p.clipMax);break;default:return{activationFunction:"",applyActivation:""}}const h=s.name;return{activationFunction:s.body,applyActivation:`value = ${h}_(value);`}},n.parseInternalActivationAttributes=p=>{const s=p.getString("activation","");if(s==="Clip"){const[h,f]=p.getFloats("activation_params",[u.MIN_CLIP,u.MAX_CLIP]);return{activation:s,clipMax:f,clipMin:h,activationCacheKey:`${s}:${h},${f}`}}return{activation:s,activationCacheKey:s}}},1253:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGatherAttributes=n.gather=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039);n.gather=(o,t,e)=>(l(t,e.axis),[o.run(f(o,t,e),t)]),n.parseGatherAttributes=o=>(0,u.createAttributeWithCacheKey)({axis:o.attributes.getInt("axis",0)});const h={name:"Gather",inputNames:["A","B"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},f=(o,t,e)=>{const r=Object.assign(Object.assign({},h),{cacheHint:e.cacheKey});return Object.assign(Object.assign({},r),{get:()=>((i,d,g,m)=>{const _=g[0].dims.slice(),y=g[1].dims.slice(),w=new Array(_.length+y.length-1);m=p.ShapeUtil.normalizeAxis(m,_.length);const v=[];for(let O=0;O{if(!o||o.length!==2)throw new Error("Gather requires 2 inputs.");const e=o[0].dims.length;if(e<1)throw new Error("Invalid input shape.");if(t<-e||t>e-1)throw new Error("Invalid axis.");if(c.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invaid input type.");if(o[1].type!=="int32"&&o[1].type!=="int16")throw new Error("Invaid input type.")}},4776:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGemmAttributesV11=n.parseGemmAttributesV7=n.gemm=void 0;const u=a(246),c=a(2517),p=a(2039);n.gemm=(o,t,e)=>(l(t,e),[o.run(h(t,e),t)]);const s=(o,t)=>{const e=o.attributes.getInt("transA",0)!==0,r=o.attributes.getInt("transB",0)!==0,i=o.attributes.getFloat("alpha",1),d=o.attributes.getFloat("beta",1);return(0,u.createAttributeWithCacheKey)({transA:e,transB:r,alpha:i,beta:d,isOptionalC:t})};n.parseGemmAttributesV7=o=>s(o,!1),n.parseGemmAttributesV11=o=>s(o,!0);const h=(o,t)=>{const e={name:"Gemm",inputNames:o.length===3?["A","B","C"]:["A","B"],inputTypes:o.length===3?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],key:t.cacheKey};return Object.assign(Object.assign({},e),{get:()=>f(e,o,t)})},f=(o,t,e)=>{const r=t[0].dims.slice(),i=t[1].dims.slice(),[d,g]=c.GemmUtil.getShapeOfGemmResult(r,e.transA,i,e.transB,t.length===3?t[2].dims:void 0),m=[d,g];if(!m)throw new Error("Can't use gemm on the given tensors");let _=r[r.length-1],y="";e.transA&&(_=r[0]),e.transA&&e.transB?y="value += _A_T(a) * _B_T(b);":e.transA&&!e.transB?y="value += _A_T(a) * _B(b);":!e.transA&&e.transB?y="value += _A(a) * _B_T(b);":e.transA||e.transB||(y="value += _A(a) * _B(b);");const w=m.length,v=` - float process(int indices[${w}]) { - int a[${w}]; - int b[${w}]; - ${t.length===3?`int c[${t[2].dims.length}];`:""} - - copyVec(indices, a); - copyVec(indices, b); - ${t.length===3?"bcastIndices_C(indices, c);":""} - - float value = 0.0; - for (int k=0; k<${_}; ++k) { - a[${w-1}] = k; - b[${w-2}] = k; - ${y} - } - - value = value * alpha; - ${t.length===3?"value += beta * _C(c);":""} - return value; - }`;return Object.assign(Object.assign({},o),{output:{dims:m,type:t[0].type,textureType:p.TextureType.unpacked},variables:[{name:"alpha",type:"float",data:e.alpha},{name:"beta",type:"float",data:e.beta}],shaderSource:v})},l=(o,t)=>{if(!o)throw new Error("Input is missing");if(t.isOptionalC&&(o.length<2||o.length>3))throw new Error("Invaid input shape.");if(!t.isOptionalC&&o.length!==3)throw new Error("Gemm requires 3 inputs");if(o.length===3&&o[2].dims.length!==1&&o[2].dims.length!==2)throw new Error("Invalid input shape of C");if(o[0].type!=="float32"&&o[0].type!=="float64"||o[1].type!=="float32"&&o[1].type!=="float64"||o.length===3&&o[2].type!=="float32"&&o[2].type!=="float64")throw new Error("Invalid input type.");if(o[0].type!==o[1].type||o.length===3&&o[0].type!==o[2].type)throw new Error("Input types are mismatched")}},8555:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedIm2ColProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(2827);n.createPackedIm2ColProgramInfoLoader=(s,h,f,l,o)=>{const t=(e=o.cacheKey,{name:"Im2Col (packed)",inputNames:["A"],inputTypes:[c.TextureType.packed],cacheHint:e});var e;return Object.assign(Object.assign({},t),{get:()=>((r,i,d,g,m,_)=>{const y=d.dims,w=g.dims,v=m.length,S=[w[1]*w[2]*w[3],m[2]*m[3]],O=w[2]*w[3],A=(0,p.unpackFromChannel)(),T=(0,u.getGlsl)(r.session.backend.glContext.version);let M="";for(let B=0;B<=1;B++)for(let $=0;$<=1;$++)M+=` - blockIndex = rc.x + ${$}; - pos = rc.y + ${B}; - - if(blockIndex < ${S[1]} && pos < ${S[0]}) { - offsetY = int(blockIndex / (${m[v-1]})) * ${_.strides[0]} - - ${_.pads[0]}; - d0 = offsetY + ${_.dilations[0]} * (imod(pos, ${O}) / ${w[2]}); - - if(d0 < ${y[2]} && d0 >= 0) { - offsetX = imod(blockIndex, ${m[v-1]}) * ${_.strides[1]} - - ${_.pads[1]}; - d1 = offsetX + ${_.dilations[1]} * imod(imod(pos, ${O}), ${w[2]}); - - if(d1 < ${y[3]} && d1 >= 0) { - - ch = int(float(pos)/ ${O}.); - innerDims = vec2(d0, d1); - result[${2*B+$}] = getChannel( - getA(0, ch, int(innerDims.x), - int(innerDims.y)), innerDims); - } - } - } - - `;const N=` - ${A} - - void main() { - ivec2 rc = getOutputCoords(); - vec4 result = vec4(0.0); - int blockIndex, pos, offsetY, d0, offsetX, d1, ch; - vec2 innerDims; - ${M} - ${T.output} = result; - } - `;return Object.assign(Object.assign({},i),{output:{dims:S,type:d.type,textureType:c.TextureType.packed},shaderSource:N,hasMain:!0})})(s,t,h,f,l,o)})}},3248:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.calculateIm2ColDims=n.createIm2ColProgramInfoLoader=void 0;const u=a(2039);n.createIm2ColProgramInfoLoader=(c,p,s,h,f)=>{const l=(o=f.cacheKey,{name:"Im2Col",inputNames:["X"],inputTypes:[u.TextureType.unpacked],cacheHint:o});var o;return Object.assign(Object.assign({},l),{get:()=>((t,e,r,i,d,g)=>{const m=r.dims,_=i.dims,y=d.length,w=(0,n.calculateIm2ColDims)(m,_,d,4),v=` - const int XC = ${m[1]}; - const int XH = ${m[2]}; - const int XW = ${m[3]}; - const int KH = ${g.kernelShape[0]}; - const int KW = ${g.kernelShape[1]}; - const int dilationH = ${g.dilations[0]}; - const int dilationW = ${g.dilations[1]}; - const int strideH = ${g.strides[0]}; - const int strideW = ${g.strides[1]}; - const int padH = ${g.pads[0]}; - const int padW = ${g.pads[1]}; - const int KHKW = KH*KW; - const int XCKHKW = XC * KHKW; - const int outputChannels = 4; - vec4 process(int indices[${y}]) { - int b = indices[0]; // batch size - int oh = indices[1] * strideH - padH; //output height - int ow = indices[2] * strideW - padW; //output width - int p = indices[3] * outputChannels; //patch - vec4 value = vec4(0.0); - for(int i=0; i < outputChannels; ++i) { - if(p < XCKHKW) { - int patchC = p / KHKW; - int patchH = (p - patchC*KHKW) / KW; - int patchW = (p - patchC*KHKW) - patchH * KW; - int xh2 = oh + patchH * dilationH; - int xw2 = ow + patchW * dilationW; - int x[${m.length}]; - x[0] = b; - x[1] = patchC; - x[2] = xh2; - x[3] = xw2; - if(xh2 >= 0 && - xh2 < XH && - xw2 >= 0 && - xw2 < XW) { - value[i] = _X(x); - } - } - ++p; - } - return value; - } - `;return Object.assign(Object.assign({},e),{output:{dims:w,type:r.type,textureType:u.TextureType.packedLastDimension},shaderSource:v})})(0,l,p,s,h,f)})},n.calculateIm2ColDims=(c,p,s,h=4)=>[s[0],s[2],s[3],Math.ceil(c[1]*p[2]*p[3]/h)]},6572:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseImageScalerAttributes=n.imageScaler=void 0;const u=a(246),c=a(2039);n.imageScaler=(l,o,t)=>(f(o),[l.run(s(l,o,t),o)]),n.parseImageScalerAttributes=l=>{const o=l.attributes.getFloat("scale"),t=l.attributes.getFloats("bias");return(0,u.createAttributeWithCacheKey)({scale:o,bias:t})};const p={name:"ImageScaler",inputNames:["X"],inputTypes:[c.TextureType.unpacked]},s=(l,o,t)=>{const e=Object.assign(Object.assign({},p),{cacheHint:t.cacheKey});return Object.assign(Object.assign({},e),{get:()=>((r,i,d,g)=>{const m=d[0].dims.slice(),_=m.length,y=` - ${h(g.bias.length)} - float process(int indices[${_}]) { - return _X(indices) * scale + getBias(bias, indices[1]); - }`;return Object.assign(Object.assign({},i),{output:{dims:m,type:d[0].type,textureType:c.TextureType.unpacked},variables:[{name:"bias",type:"float",arrayLength:g.bias.length,data:g.bias},{name:"scale",type:"float",data:g.scale}],shaderSource:y})})(0,e,o,t)})},h=l=>{const o=[`float getBias(float bias[${l}], int channel) {`];for(let t=0;t{if(!l||l.length!==1)throw new Error("ImageScaler requires 1 input.");if(l[0].dims.length!==4)throw new Error("Invalid input shape.");if(l[0].type!=="float32"&&l[0].type!=="float64")throw new Error("Invalid input type.")}},3346:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInstanceNormalizationAttributes=n.instanceNormalization=void 0;const u=a(5060),c=a(2039);n.instanceNormalization=(o,t,e)=>{l(t);const r=o.run(s(t[0]),t);return[o.run(f(o,t[0],e,r.dims),[t[0],r,t[1],t[2]])]},n.parseInstanceNormalizationAttributes=o=>o.attributes.getFloat("epsilon",1e-5);const p={name:"InstanceNormalization_MeanAndVariance",inputNames:["X"],inputTypes:[c.TextureType.unpacked]},s=o=>Object.assign(Object.assign({},p),{get:()=>((t,e)=>{const r=e.dims.slice(),i=r[1],d=r[2]*r[3],g=[r[0],i],m=` - vec4 process(int[2] indices) { - vec4 v = vec4(0.0); - int a[4]; - a[0] = indices[0]; - a[1] = indices[1]; - float temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += x; - } - } - float mean = temp / float(${d}); - temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += (x - mean) * (x - mean); - } - } - v.r = mean; - v.g = temp / float(${d}); - - return v; - }`;return Object.assign(Object.assign({},t),{output:{dims:g,type:e.type,textureType:c.TextureType.packedLastDimension},shaderSource:m})})(p,o)}),h={name:"InstanceNormalization_ComputeOutput",inputNames:["X","MeanAndVariance","Scale","B"],inputTypes:[c.TextureType.unpacked,c.TextureType.packedLastDimension,c.TextureType.unpacked,c.TextureType.unpacked]},f=(o,t,e,r)=>{const i=Object.assign(Object.assign({},h),{cacheHint:`${e}`});return Object.assign(Object.assign({},i),{get:()=>((d,g,m,_,y)=>{const w=(0,u.getGlsl)(d.session.backend.glContext.version),[v,S]=d.calculateTextureWidthAndHeight(y,c.TextureType.packedLastDimension),[O,A]=[v/4,S],T=` - vec4 get_MeanAndVariance(int[2] mv) { - int offset = indicesToOffset_MeanAndVariance(mv); - vec2 coords = offsetToCoords(offset, ${O}, ${A}); - return ${w.texture2D}(MeanAndVariance, coords); - } - - float process(int[4] indices) { - int mv[2]; - mv[0] = indices[0]; - mv[1] = indices[1]; - vec4 mean_and_variance = get_MeanAndVariance(mv); - float mean = mean_and_variance.r; - float variance = mean_and_variance.g; - - int sb[1]; - sb[0] = indices[1]; - float scale = _Scale(sb); - float b = _B(sb); - - return scale * (_X(indices) - mean) / sqrt(variance + epsilon) + b; - }`;return Object.assign(Object.assign({},g),{output:{dims:m.dims,type:m.type,textureType:c.TextureType.unpacked},variables:[{name:"epsilon",type:"float",data:_}],shaderSource:T})})(o,i,t,e,r)})},l=o=>{if(!o||o.length!==3)throw new Error("InstanceNormalization requires 3 inputs.");const t=o[0],e=o[1],r=o[2];if(t.dims.length<3||e.dims.length!==1||r.dims.length!==1)throw new Error("Invalid input shape.");if(e.dims[0]!==t.dims[1]||r.dims[0]!==t.dims[1])throw new Error("Input shapes are mismatched.");if(t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64")throw new Error("Invalid input type.");if(o[0].dims.length!==4)throw new Error("Only support 4-D input shape.")}},708:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedMatmulProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(9390),h=a(2823),f=a(5623);n.createPackedMatmulProgramInfoLoader=(l,o,t)=>{const e=(r=o.length>2,i=t.activationCacheKey,{name:"MatMul (packed)",inputNames:r?["A","B","Bias"]:["A","B"],inputTypes:r?[p.TextureType.packed,p.TextureType.packed,p.TextureType.packed]:[p.TextureType.packed,p.TextureType.packed],cacheHint:i});var r,i;return Object.assign(Object.assign({},e),{get:()=>((d,g,m,_)=>{const y=m.length>2,w=y?"value += getBiasForMatmul();":"",v=m[0].dims,S=m[1].dims,O=u.BroadcastUtil.calcShape(v,S,!0),A=!u.ShapeUtil.areEqual(m[0].dims,m[1].dims);if(!O)throw new Error("Can't use matmul on the given tensors");const T=v[v.length-1],M=Math.ceil(T/2),N=v.length,B=S.length,$=(0,c.getGlsl)(d.session.backend.glContext.version),L=(0,s.getCoordsDataType)(O.length),H=O.length,C=(0,s.getGlChannels)(),{activationFunction:z,applyActivation:J}=(0,h.getActivationSnippet)(_),X=y?`${(0,f.getBiasForMatmul)(L,C,m[2].dims,O,!0)}`:"",te=A?`${function(Oe,ce,Te,ye){let Fe=[],He=[];const Ae=Te[0].dims,Ne=Te[1].dims,De=Ae.length,Pe=Ne.length,ve=ye.length,Be=ve-De,Ue=ve-Pe;Fe=Ae.map((Se,$e)=>`coords.${ce[$e+Be]}`),Fe[De-1]="i*2",Fe.join(", "),He=Ne.map((Se,$e)=>`coords.${ce[$e+Ue]}`),He[Pe-2]="i*2",He.join(", ");const Ve=u.BroadcastUtil.getBroadcastDims(Ae,ye),Xe=u.BroadcastUtil.getBroadcastDims(Ne,ye),Qe=Ve.map(Se=>`coords.${ce[Se+Be]} = 0;`).join(` -`),Ge=Xe.map(Se=>`coords.${ce[Se+Ue]} = 0;`).join(` -`),ze=`int lastDim = coords.${ce[ve-1]}; - coords.${ce[ve-1]} = coords.${ce[ve-2]}; - coords.${ce[ve-2]} = lastDim;`;return` -vec4 getAAtOutCoordsMatmul(int i) { - ${Oe} coords = getOutputCoords(); - ${ze} - ${Qe} - vec4 outputValue = getA(${Fe}); - return outputValue; -} - -vec4 getBAtOutCoordsMatmul(int i) { - ${Oe} coords = getOutputCoords(); - ${ze} - ${Ge} - vec4 outputValue = getB(${He}); - return outputValue; -}`}(L,C,m,O)}`:"",ne=A?"getAAtOutCoordsMatmul(i)":`getA(${function(Oe,ce){let Te="";for(let ye=0;ye{Object.defineProperty(n,"__esModule",{value:!0}),n.getBiasForMatmul=n.createMatmulProgramInfoLoader=n.parseMatMulAttributes=n.matMul=void 0;const u=a(2517),c=a(2039),p=a(9390),s=a(2823),h=a(708);function f(t,e){const r=(i=t.length>2,d=e.activationCacheKey,{name:"MatMul",inputNames:i?["A","B","Bias"]:["A","B"],inputTypes:i?[c.TextureType.unpacked,c.TextureType.unpacked,c.TextureType.unpacked]:[c.TextureType.unpacked,c.TextureType.unpacked],cacheHint:d});var i,d;return Object.assign(Object.assign({},r),{get:()=>function(g,m,_){const y=m[0].dims,w=m[1].dims,v=u.BroadcastUtil.calcShape(y,w,!0);if(!v)throw new Error("Can't use matmul on the given tensors");const S=(0,p.getCoordsDataType)(v.length),O=(0,p.getGlChannels)(),{activationFunction:A,applyActivation:T}=(0,s.getActivationSnippet)(_),M=m.length>2,N=M?"value += getBiasForMatmul();":"",B=M?`${o(S,O,m[2].dims,v,!1)}`:"",$=v.length,L=y.length,H=w.length,C=` - ${A} - ${B} - float process(int indices[${$}]) { - int a[${L}]; - int b[${H}]; - bcastMatmulIndices_A(indices, a); - bcastMatmulIndices_B(indices, b); - - float value; - for (int k=0; k<${y[y.length-1]}; ++k) { - a[${L-1}] = k; - b[${H-2}] = k; - value += _A(a) * _B(b); - } - ${N} - ${T} - return value; - }`;return Object.assign(Object.assign({},g),{output:{dims:v,type:m[0].type,textureType:c.TextureType.unpacked},shaderSource:C})}(r,t,e)})}n.matMul=(t,e,r)=>(l(e),t.session.pack?[t.run((0,h.createPackedMatmulProgramInfoLoader)(t,e,r),e)]:[t.run(f(e,r),e)]),n.parseMatMulAttributes=t=>(0,s.parseInternalActivationAttributes)(t.attributes),n.createMatmulProgramInfoLoader=f;const l=t=>{if(!t||t.length!==2)throw new Error("MatMul requires 2 inputs.");if(t[0].dims[t[0].dims.length-1]!==t[1].dims[t[1].dims.length-2])throw new Error("shared dimension does not match.");if(t[0].type!=="float32"&&t[0].type!=="float64"||t[1].type!=="float32"&&t[1].type!=="float64")throw new Error("inputs should be float type");if(t[0].type!==t[1].type)throw new Error("inputs types should match")};function o(t,e,r,i,d){let g="";const m=r.length,_=i.length,y=_-m;g=_<2&&m>0?"coords":r.map((S,O)=>`coords.${e[O+y]}`).join(", ");const w=u.BroadcastUtil.getBroadcastDims(r,i).map(S=>`coords.${e[S+y]} = 0;`).join(` -`);let v="vec4(outputValue.xx, outputValue.yy)";return u.ShapeUtil.size(r)===1&&(v="vec4(outputValue.x)"),d?` -vec4 getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${w} - vec4 outputValue = getBias(${g}); - return ${v}; -}`:` -float getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${w} - return getBias(coords.x); -}`}n.getBiasForMatmul=o},2403:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h={name:"pack",inputNames:["A"],inputTypes:[c.TextureType.unpackedReversed]};n.createPackProgramInfoLoader=(f,l)=>Object.assign(Object.assign({},h),{get:()=>((o,t)=>{const e=(0,u.getGlsl)(o.session.backend.glContext.version),r=t.dims,i=r.length,d=t.dims.length,g=(0,p.getCoordsDataType)(d),m=(0,s.getChannels)("rc",d),_=(y=d,w=m,v=r[r.length-2],S=r[r.length-1],y===0||y===1?"":` - int r = ${w[y-2]}; - int c = ${w[y-1]}; - int rp1 = ${w[y-2]} + 1; - int cp1 = ${w[y-1]} + 1; - bool rEdge = rp1 >= ${S}; - bool cEdge = cp1 >= ${v}; - `);var y,w,v,S;let O;O=i===0?[1,1]:i===1?[r[0],1]:[r[d-1],r[d-2]];const A=function(N,B,$){if(N===0)return"false";if(N===1)return`rc > ${B[0]}`;let L="";for(let H=N-2;H= ${B[H-N+2]}`,H= ${N[0]} ? 0. : getA(rc + 1), - 0, 0`;let L="";if($>2)for(let H=0;H<$-2;++H)L+=`${B[H]},`;return`getA(${L}r, c), - rEdge ? 0. : getA(${L}rp1, c), - cEdge ? 0. : getA(${L}r, cp1), - rEdge || cEdge ? 0. : getA(${L}rp1, cp1)`}(r,m),M=` - void main() { - ${g} rc = getOutputCoords(); - - if(${A}) { - ${e.output} = vec4(0); - } else { - ${_} - - ${e.output} = vec4(${T}); - } - } - `;return Object.assign(Object.assign({},h),{hasMain:!0,output:{dims:t.dims,type:t.type,textureType:c.TextureType.packed},shaderSource:M})})(f,l)})},2827:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.unpackFromChannel=n.getChannels=n.getVecChannels=void 0;const u=a(9390);function c(p,s){return(0,u.getGlChannels)(s).map(h=>`${p}.${h}`)}n.getVecChannels=c,n.getChannels=function(p,s){return s===1?[p]:c(p,s)},n.unpackFromChannel=function(){return` - float getChannel(vec4 frag, int dim) { - int modCoord = imod(dim, 2); - return modCoord == 0 ? frag.r : frag.g; - } - - float getChannel(vec4 frag, vec2 innerDims) { - vec2 modCoord = mod(innerDims, 2.); - return modCoord.x == 0. ? - (modCoord.y == 0. ? frag.r : frag.g) : - (modCoord.y == 0. ? frag.b : frag.a); - } - `}},2870:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parsePadAttributesV11=n.padV11=n.parsePadAttributesV2=n.padV2=void 0;const u=a(246),c=a(2517),p=a(5060),s=a(2039),h={name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.padV2=(g,m,_)=>(o(m),[g.run(Object.assign(Object.assign({},h),{cacheHint:_.cacheKey,get:()=>l(g,m[0],_)}),m)]),n.parsePadAttributesV2=g=>{const m=g.attributes.getString("mode","constant"),_=g.attributes.getFloat("value",0),y=g.attributes.getInts("pads");return(0,u.createAttributeWithCacheKey)({mode:m,value:_,pads:y})},n.padV11=(g,m,_)=>{t(m);const y=f(g,m,_);return(0,n.padV2)(g,[m[0]],y)},n.parsePadAttributesV11=g=>g.attributes.getString("mode","constant");const f=(g,m,_)=>{if(!g.session.isInitializer(m[1].dataId)||m.length>=3&&!g.session.isInitializer(m[2].dataId))throw new Error("dynamic pad attributes are not allowed");const y=Array.from(m[1].integerData),w=m.length>=3?m[2].floatData[0]:0;return(0,u.createAttributeWithCacheKey)({mode:_,pads:y,value:w})},l=(g,m,_)=>{const y=c.ShapeUtil.padShape(m.dims.slice(),_.pads),w=y.length,v=` - ${e(g,m,_)} - float process(int[${w}] indices) { - return padA(indices); - }`;return{name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked],output:{dims:y,type:m.type,textureType:s.TextureType.unpacked},shaderSource:v}},o=g=>{if(!g||g.length!==1)throw new Error("Pad requires 1 input");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type.")},t=g=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Pad requires 2 or 3 inputs");if(g[1].type!=="int32")throw new Error("Invalid input type.");if(g.length>=3&&g[2].type==="string")throw new Error("Invalid input type.")},e=(g,m,_)=>{const y=(0,p.getGlsl)(g.session.backend.glContext.version),[w,v]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),S=c.ShapeUtil.computeStrides(m.dims);switch(_.mode){case"constant":return r(y,m.dims,S,w,v,_.pads,_.value);case"reflect":return i(y,m.dims,S,w,v,_.pads);case"edge":return d(y,m.dims,S,w,v,_.pads);default:throw new Error("Invalid mode")}},r=(g,m,_,y,w,v,S)=>{const O=m.length;let A="";for(let T=O-1;T>=0;--T)A+=` - k = m[${T}] - ${v[T]}; - if (k < 0) return constant; - if (k >= ${m[T]}) return constant; - offset += k * ${_[T]}; - `;return` - float padA(int m[${O}]) { - const float constant = float(${S}); - int offset = 0; - int k = 0; - ${A} - vec2 coords = offsetToCoords(offset, ${y}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},i=(g,m,_,y,w,v)=>{const S=m.length;let O="";for(let A=S-1;A>=0;--A)O+=` - k = m[${A}] - ${v[A]}; - if (k < 0) { k = -k; } - { - const int _2n_1 = ${2*(m[A]-1)}; - k = int( mod( float(k), float(_2n_1) ) ) ; - if(k >= ${m[A]}) { k = _2n_1 - k; } - } - offset += k * ${_[A]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${O} - vec2 coords = offsetToCoords(offset, ${y}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},d=(g,m,_,y,w,v)=>{const S=m.length;let O="";for(let A=S-1;A>=0;--A)O+=` - k = m[${A}] - ${v[A]}; - if (k < 0) k = 0; - if (k >= ${m[A]}) k = ${m[A]-1}; - offset += k * ${_[A]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${O} - vec2 coords = offsetToCoords(offset, ${y}, ${w}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `}},2143:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.globalMaxPool=n.parseMaxPoolAttributes=n.maxPool=n.parseGlobalAveragePoolAttributes=n.globalAveragePool=n.parseAveragePoolAttributes=n.averagePool=void 0;const u=a(246),c=a(2517),p=a(2039);n.averagePool=(d,g,m)=>{t(g);const _={name:"AveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},_),{get:()=>s(g,_,!1,m)}),g)]},n.parseAveragePoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),_=d.attributes.getInt("count_include_pad",0)!==0,y=d.attributes.getInts("kernel_shape"),w=d.attributes.getInts("strides",[]),v=d.attributes.getInts("pads",[]);if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for AveragePool");return(0,u.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:_,kernelShape:y,strides:w,pads:v})};const s=(d,g,m,_)=>{const[y,w]=f(d,_,m),v=c.ShapeUtil.size(y.kernelShape);let S="";y.countIncludePad?S+=`value /= float(${v});`:S+=`value /= float(${v} - pad);`;const O=` - ${e(d[0].dims,y,"value += _X(x);",S,"0.0")} - `;return Object.assign(Object.assign({},g),{output:{dims:w,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:O})};n.globalAveragePool=(d,g,m)=>{t(g);const _={name:"GlobalAveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:`${m.countIncludePad}`};return[d.run(Object.assign(Object.assign({},_),{get:()=>s(g,_,!0,m)}),g)]},n.parseGlobalAveragePoolAttributes=d=>{const g=d.attributes.getInt("count_include_pad",0)!==0;return(0,u.createAttributeWithCacheKey)({autoPad:"",ceilMode:0,countIncludePad:g,kernelShape:[],strides:[],pads:[]})},n.maxPool=(d,g,m)=>{t(g);const _={name:"MaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},_),{get:()=>h(g,_,!1,m)}),g)]},n.parseMaxPoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),_=d.attributes.getInts("kernel_shape"),y=d.attributes.getInts("strides",[]),w=d.attributes.getInts("pads",[]),v=d.attributes.getInt("storage_order",0),S=d.attributes.getInts("dilations",[]);if(v!==0)throw new Error("column major storage order is not yet supported for MaxPool");if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for MaxPool");return(0,u.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:!1,kernelShape:_,strides:y,pads:w,storageOrder:v,dilations:S})};const h=(d,g,m,_)=>{const[y,w]=f(d,_,m),v=` - ${e(d[0].dims,y,` - value = max(_X(x), value); - `,"","-1e5")} - `;return Object.assign(Object.assign({},g),{output:{dims:w,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:v})},f=(d,g,m)=>{const _=d[0].dims.slice(),y=Object.hasOwnProperty.call(g,"dilations"),w=g.kernelShape.slice(),v=g.strides.slice(),S=y?g.dilations.slice():[],O=g.pads.slice();c.PoolConvUtil.adjustPoolAttributes(m,_,w,v,S,O);const A=c.PoolConvUtil.computePoolOutputShape(m,_,v,S,w,O,g.autoPad),T=Object.assign({},g);return y?Object.assign(T,{kernelShape:w,strides:v,pads:O,dilations:S,cacheKey:g.cacheKey}):Object.assign(T,{kernelShape:w,strides:v,pads:O,cacheKey:g.cacheKey}),[T,A]},l={autoPad:"",ceilMode:0,countIncludePad:!1,kernelShape:[],strides:[],pads:[],storageOrder:0,dilations:[],cacheKey:""},o={name:"GlobalMaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.globalMaxPool=(d,g)=>(t(g),[d.run(Object.assign(Object.assign({},o),{get:()=>h(g,o,!0,l)}),g)]);const t=d=>{if(!d||d.length!==1)throw new Error("Pool ops requires 1 input.");if(d[0].type!=="float32"&&d[0].type!=="float64")throw new Error("Invalid input type.")},e=(d,g,m,_,y)=>{const w=d.length;if(g.kernelShape.length<=2){const v=g.kernelShape[g.kernelShape.length-1],S=g.strides[g.strides.length-1],O=g.pads[g.pads.length/2-1],A=g.pads[g.pads.length-1],T=d[w-1];let M="",N="",B="";if(M=O+A!==0?` - for (int i = 0; i < ${v}; i++) { - x[${w} - 1] = indices[${w} - 1] * ${S} - ${O} + i; - if (x[${w} - 1] < 0 || x[${w} - 1] >= ${T}) { - pad++; - continue; - } - ${m} - }`:` - for (int i = 0; i < ${v}; i++) { - x[${w} - 1] = indices[${w} - 1] * ${S} - ${O} + i; - ${m} - }`,g.kernelShape.length===2){const $=g.kernelShape[g.kernelShape.length-2],L=g.strides[g.strides.length-2],H=g.pads[g.pads.length/2-2],C=g.pads[g.pads.length-2],z=d[w-2];N=H+C!==0?` - for (int j = 0; j < ${$}; j++) { - x[${w} - 2] = indices[${w} - 2] * ${L} - ${H} + j; - if (x[${w} - 2] < 0 || x[${w} - 2] >= ${z}) { - pad+= ${v}; - continue; - } - `:` - for (int j = 0; j < ${$}; j++) { - x[${w} - 2] = indices[${w} - 2] * ${L} - ${H} + j; - `,B=` - } - `}return` - float process(int indices[${w}]) { - int x[${w}]; - copyVec(indices, x); - - float value = ${y}; - int pad = 0; - ${N} - ${M} - ${B} - ${_} - return value; - } - `}{const v=c.ShapeUtil.size(g.kernelShape),S=c.ShapeUtil.computeStrides(g.kernelShape),O=S.length,A=g.pads.length,T=i(O),M=r(d,"inputDims"),N=r(g.pads,"pads"),B=r(S,"kernelStrides"),$=r(g.strides,"strides");let L="";return L=g.pads.reduce((H,C)=>H+C)?` - if (x[j] >= inputDims[j] || x[j] < 0) { - pad++; - isPad = true; - break; - } - } - if (!isPad) { - ${m} - }`:` - } - ${m} - `,` - ${T} - float process(int indices[${w}]) { - int x[${w}]; - copyVec(indices, x); - int offset[${O}]; - int pads[${A}]; - int inputDims[${w}]; - int kernelStrides[${O}]; - int strides[${O}]; - ${N} - ${M} - ${$} - ${B} - - float value = ${y}; - int pad = 0; - bool isPad = false; - for (int i = 0; i < ${v}; i++) { - offsetToIndices(i, kernelStrides, offset); - isPad = false; - for (int j = ${w} - ${O}; j < ${w}; j++) { - x[j] = indices[j] * strides[j - ${w} + ${O}] - + offset[j - ${w} + ${O}] - pads[j - 2]; - ${L} - } - ${_} - - return value; - } - `}},r=(d,g)=>{let m="";for(let _=0;_` - void offsetToIndices(int offset, int[${d}] strides, out int[${d}] indices) { - if (${d} == 0) { - return; - } - for (int i = 0; i < ${d} - 1; ++i) { - indices[i] = offset / strides[i]; - offset -= indices[i] * strides[i]; - } - indices[${d} - 1] = offset; - }`},4939:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reduceLogSumSquare=n.reduceLogSum=n.reduceProd=n.reduceMin=n.reduceMax=n.reduceMean=n.reduceSum=n.parseReduceAttributes=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039),h=(o,t,e,r,i)=>{l(t);const d={name:r,inputNames:["A"],inputTypes:[s.TextureType.unpacked]};return[o.run(Object.assign(Object.assign({},d),{cacheHint:e.cacheKey,get:()=>f(o,t,e,r,i,d)}),t)]};n.parseReduceAttributes=o=>{const t=o.attributes.getInts("axes",[]),e=o.attributes.getInt("keepdims",1)===1;return(0,u.createAttributeWithCacheKey)({axes:t,keepDims:e})};const f=(o,t,e,r,i,d)=>{const g=[],m=t[0].dims.length||1,_=[],y=p.ShapeUtil.normalizeAxes(e.axes,t[0].dims.length),w=i(t,y);let v=w[1];for(let O=0;O=0||y.length===0?(e.keepDims&&g.push(1),v=` - for(int j${O} = 0; j${O} < ${t[0].dims[O]}; j${O}++) { - inputIdx[${O}] = j${O}; - ${v} - }`):(_.push(`inputIdx[${O}] = outputIdx[${g.length}];`),g.push(t[0].dims[O]));const S=` - float process(int outputIdx[${g.length||1}]) { - float value; // final result - int inputIdx[${m}]; // addressing input data - ${_.join(` -`)} - ${w[0]} // init ops for reduce max/min - ${v} - ${w[2]} // final computation for reduce mean - return value; - }`;return Object.assign(Object.assign({},d),{output:{dims:g,type:t[0].type,textureType:s.TextureType.unpacked},shaderSource:S})},l=o=>{if(!o||o.length!==1)throw new Error("Reduce op requires 1 input.");if(c.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invalid input type.")};n.reduceSum=(o,t,e)=>h(o,t,e,"ReduceSum",()=>["value = 0.0;","value += _A(inputIdx);",""]),n.reduceMean=(o,t,e)=>h(o,t,e,"ReduceMean",(r,i)=>{let d=1;for(let g=0;g=0||i.length===0)&&(d*=r[0].dims[g]);return["value = 0.0;","value += _A(inputIdx);",`value /= ${d}.;`]}),n.reduceMax=(o,t,e)=>h(o,t,e,"ReduceMax",(r,i)=>{const d=[];for(let g=0;g=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(` -`)} -value = _A(inputIdx);`,"value = max(value, _A(inputIdx));",""]}),n.reduceMin=(o,t,e)=>h(o,t,e,"ReduceMin",(r,i)=>{const d=[];for(let g=0;g=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(` -`)} -value = _A(inputIdx);`,"value = min(value, _A(inputIdx));",""]}),n.reduceProd=(o,t,e)=>h(o,t,e,"ReduceProd",()=>["value = 1.0;","value *= _A(inputIdx);",""]),n.reduceLogSum=(o,t,e)=>h(o,t,e,"ReduceLogSum",()=>["value = 0.0;","value += _A(inputIdx);","value = log(value);"]),n.reduceLogSumSquare=(o,t,e)=>h(o,t,e,"ReduceLogSumSquare",()=>["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""])},7019:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.isReshapeCheap=n.processDims3D=n.createPackedReshape3DProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(2827);n.createPackedReshape3DProgramInfoLoader=(h,f,l)=>{const o=(t=>({name:"Reshape (packed)",inputTypes:[p.TextureType.packed],inputNames:["A"],cacheHint:`${t}`}))(l);return Object.assign(Object.assign({},o),{get:()=>((t,e,r,i)=>{const d=e.dims,g=i;let m="";for(let w=0;w<4;w++){let v="";switch(w){case 0:v="outputCoords = rc;";break;case 1:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z);";break;case 2:v="outputCoords = ivec3(rc.x, rc.y, rc.z+1);";break;case 3:v="outputCoords = ivec3(rc.x, rc.y+1, rc.z+1);";break;default:throw new Error}m+=` - ${v} - ${w>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""} - int flattenedIndex = getFlattenedIndex(outputCoords); - - ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex); - vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z)); - - result[${w}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims); - - ${w>0?"}":""} - `}const _=(0,c.getGlsl)(t.session.backend.glContext.version),y=` - ${function(w){const v=u.ShapeUtil.computeStrides(w),S=["b","r","c"],O="index";return` - ivec3 inputCoordsFromReshapedOutCoords(int index) { - ${v.map((A,T)=>`int ${S[T]} = ${O} / ${A}; ${T===v.length-1?`int ${S[T+1]} = ${O} - ${S[T]} * ${A}`:`index -= ${S[T]} * ${A}`};`).join("")} - return ivec3(b, r, c); - } - `}(d)} - ${function(w){const v=u.ShapeUtil.computeStrides(w);return` - int getFlattenedIndex(ivec3 coords) { - // reverse y, z order - return coords.x * ${v[0]} + coords.z * ${v[1]} + coords.y; - } -`}(g)} - ${(0,s.unpackFromChannel)()} - - void main() { - ivec3 rc = getOutputCoords(); - - vec4 result = vec4(0.0); - - ivec3 outputCoords; - int rows = ${g[2]}; - int cols = ${g[1]}; - - ${m} - ${_.output} = result; - } - `;return Object.assign(Object.assign({},r),{output:{dims:g,type:e.type,textureType:p.TextureType.packed},shaderSource:y,hasMain:!0})})(h,f,o,l)})},n.processDims3D=function(h){if(h.length===0)return[1,1,1];let f=1;for(let l=0;l1?h[h.length-2]:1,h[h.length-1]]},n.isReshapeCheap=function(h,f){let l=!1;return l=h.length===0||f.length===0||(h.length<2||f.length<2?h[h.length-1]===f[f.length-1]:h[h.length-1]===f[f.length-1]&&h[h.length-2]===f[f.length-2]),l}},718:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reshape=void 0;const u=a(2517);n.reshape=(c,p)=>{const s=u.ShapeUtil.calculateReshapedDims(p[0].dims,p[1].integerData);return c.session.pack?[c.reshapePacked(p[0],s)]:[c.reshapeUnpacked(p[0],s)]}},2268:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseResizeAttributesV11=n.parseResizeAttributesV10=n.resize=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h=a(9793),f={name:"Resize",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.resize=(r,i,d)=>((0,h.validateInputs)(i,d),[r.run(Object.assign(Object.assign({},f),{cacheHint:d.cacheKey,get:()=>l(r,i,d)}),i)]),n.parseResizeAttributesV10=r=>(0,h.parseUpsampleAttributes)(r,10),n.parseResizeAttributesV11=r=>(0,h.parseUpsampleAttributes)(r,11);const l=(r,i,d)=>{const g=(0,u.getGlsl)(r.session.backend.glContext.version),[m,_]=o(i,d);if(m.every(L=>L===1)&&d.coordinateTransformMode!=="tf_crop_and_resize")return Object.assign(Object.assign({},f),{output:{dims:_,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:`void main() { - vec4 v = ${g.texture2D}(X, TexCoords); - ${g.output} = v; - }`});const y=_.length;if(y<2)throw new Error(`output dimension should be at least 2, but got ${y}`);const w=_[y-2],v=_[y-1],S=i[0].dims;if(y!==S.length)throw new Error(`output dimension should match input ${S.length}, but got ${y}`);const O=S[y-2],A=S[y-1],T=m[y-2],M=m[y-1];let N="";if(d.mode!=="linear")throw new Error(`resize (packed) does not support mode: '${d.mode}'`);switch(d.coordinateTransformMode){case"asymmetric":N=` - vec4 getSourceFracIndex(ivec4 coords) { - return vec4(coords) / scaleWHWH; - } - `;break;case"half_pixel":N=` - vec4 getSourceFracIndex(ivec4 coords) { - return (vec4(coords) + 0.5) / scaleWHWH - 0.5; - } - `;break;case"pytorch_half_pixel":N=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 fcoords = vec4(coords); - return vec4( - ${v}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0, - ${w}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0, - ${v}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0, - ${w}.0 > 1.0 ? (fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0 - ); - } - `;break;case"align_corners":N=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 resized = vec4(${v}.0 - 1.0, ${w}.0 - 1.0, ${v}.0 - 1.0, - ${w}.0 - 1.0); - vec4 original = vec4(${A}.0 - 1.0, ${O}.0 - 1.0, ${A}.0 - 1.0, - ${O}.0 - 1.0); - vec4 new_scale = original / resized; - return vec4(coords) * new_scale; - } - `;break;default:throw new Error(`resize (packed) does not support coordinateTransformMode: '${d.coordinateTransformMode}'`)}const B=(0,p.getCoordsDataType)(y),$=` - const vec2 inputWH = vec2(${O}.0, ${A}.0); - const vec4 scaleWHWH = vec4(float(${T}), float(${M}), float(${T}), float(${M})); - ${(0,s.unpackFromChannel)()} - ${N} - float getAValue(int x10, int r, int c, int d) { - return getChannel(getA(x10, r, c, d), vec2(c, d)); - } - void main() { - ${B} rc = getOutputCoords(); - - int batch = rc[0]; - int depth = rc[1]; - - // retrieve the 4 coordinates that is used in the 4 packed output values. - ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1); - - // calculate the source index in fraction - vec4 sourceFrac = getSourceFracIndex(coords); - - // get the lower and upper bound of the 4 values that will be packed into one texel. - ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy))); - ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw))); - ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy))); - ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw))); - - bool hasNextRow = rc.w < ${w-1}; - bool hasNextCol = rc.z < ${v-1}; - - // pack x00, x01, x10, x11's top-left corner into one vec4 structure - vec4 topLeft = vec4( - getAValue(batch, depth, x00.x, x00.y), - hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0); - - // pack x00, x01, x10, x11's top-right corner into one vec4 structure - vec4 topRight = vec4( - getAValue(batch, depth, x00.x, x00.w), - hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0); - - // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure - vec4 bottomLeft = vec4( - getAValue(batch, depth, x00.z, x00.y), - hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0); - - // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure - vec4 bottomRight = vec4( - getAValue(batch, depth, x00.z, x00.w), - hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.w) : 0.0); - - // calculate the interpolation fraction on u and v direction - vec4 frac = vec4(sourceFrac) - floor(sourceFrac); - vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0)); - - vec4 top = mix(topLeft, topRight, clampFrac.ywyw); - vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw); - vec4 newValue = mix(top, bottom, clampFrac.xxzz); - - ${g.output} = vec4(newValue); - } - `;return Object.assign(Object.assign({},f),{output:{dims:_,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:$})},o=(r,i)=>{const d=r[0].dims;let g,m=i.scales;if(m.length===0){const y=r[i.scalesInputIdx];if(y&&y.size!==0){if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");m=t(y,i.mode,i.isResize)}else{const w=r[i.sizesInputIdx];if(!w||w.size===0)throw new Error("Either scales or sizes MUST be provided as input.");g=Array.from(w.integerData),m=e(g,d,i.mode,i.isResize)}}else if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");const _=g||d.map((y,w)=>Math.floor(y*m[w]));return[m,_]},t=(r,i,d)=>{const g=Array.from(r.floatData);return(0,h.scalesValidation)(g,i,d),g},e=(r,i,d,g)=>{const m=i.length,_=new Array(m);for(let y=0,w=m;y{Object.defineProperty(n,"__esModule",{value:!0}),n.shape=void 0;const u=a(9162);n.shape=(p,s)=>(c(s),[new u.Tensor([s[0].dims.length],"int32",void 0,void 0,new Int32Array(s[0].dims))]);const c=p=>{if(!p||p.length!==1)throw new Error("Shape requires 1 input.")}},2278:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sliceV10=n.parseSliceAttributes=n.slice=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039),h={name:"Slice",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.slice=(e,r,i)=>(l(r),[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),r)]),n.parseSliceAttributes=e=>{const r=e.attributes.getInts("starts"),i=e.attributes.getInts("ends"),d=e.attributes.getInts("axes",[]);return(0,u.createAttributeWithCacheKey)({starts:r,ends:i,axes:d})};const f=(e,r,i)=>{const d=i.axes.length===0?r.dims.slice(0).map((S,O)=>O):i.axes,g=p.ShapeUtil.normalizeAxes(d,r.dims.length),m=i.starts.map((S,O)=>S>r.dims[g[O]]-1?r.dims[g[O]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[O]])),_=i.ends.map((S,O)=>S>r.dims[g[O]]-1?r.dims[g[O]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[O]])),y=r.dims.slice(),w=[];for(let S=0;S0&&w.push(`outputIdx[${g[S]}] += ${m[S]};`);const v=` - float process(int outputIdx[${y.length}]) { - ${w.join(` - `)} - return _A(outputIdx); - }`;return Object.assign(Object.assign({},h),{output:{dims:y,type:r.type,textureType:s.TextureType.unpacked},shaderSource:v})},l=e=>{if(!e||e.length!==1)throw new Error("Slice requires 1 input.");if(c.NUMBER_TYPES.indexOf(e[0].type)===-1)throw new Error("Invalid input type.")};n.sliceV10=(e,r)=>{t(r);const i=o(e,r);return[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),[r[0]])]};const o=(e,r)=>{if(!e.session.isInitializer(r[1].dataId)||!e.session.isInitializer(r[2].dataId)||r.length>=4&&!e.session.isInitializer(r[3].dataId)||r.length>=5&&!e.session.isInitializer(r[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(r.length>=5&&r[4].integerData.some(m=>m!==1))throw new Error("currently non-1 steps is not supported for Slice");const i=Array.from(r[1].integerData),d=Array.from(r[2].integerData),g=r.length>=4?Array.from(r[3].integerData):[];return{starts:i,ends:d,axes:g,cacheKey:`${g};${i};${d}`}},t=e=>{if(!e||e.length<3||e.length>5)throw new Error("Invalid input number.");if(e[1].type!=="int32"||e[1].dims.length!==1)throw new Error("Invalid input type.");if(e[2].type!=="int32"||e[2].dims.length!==1)throw new Error("Invalid input type.");if(e.length>=4&&(e[3].type!=="int32"||e[3].dims.length!==1))throw new Error("Invalid input type.");if(e.length>=5&&(e[4].type!=="int32"||e[4].dims.length!==1))throw new Error("Invalid input type.")}},5524:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.softmaxV13=n.parseSoftmaxAttributesV13=n.parseSoftmaxAttributes=n.softmax=void 0;const u=a(246),c=a(2517),p=a(5060),s=a(2039),h=a(3738),f={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[s.TextureType.unpacked]},l={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},o={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked,s.TextureType.unpacked]};n.softmax=(g,m,_)=>{d(m);const y=m[0].dims.slice(),w=c.ShapeUtil.normalizeAxis(_.axis,y.length),v=c.ShapeUtil.sizeToDimension(y,w),S=c.ShapeUtil.sizeFromDimension(y,w);return t(g,m,_,v,S)},n.parseSoftmaxAttributes=g=>(0,u.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",1)}),n.parseSoftmaxAttributesV13=g=>(0,u.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",-1)}),n.softmaxV13=(g,m,_)=>{d(m);const y=m[0].dims.slice(),w=c.ShapeUtil.normalizeAxis(_.axis,y.length),v=y.length,S=w!==v-1,O=[];let A,T=[],M=[];S&&(T=Array.from({length:v}).map((L,H)=>H),T[w]=v-1,T[v-1]=w,T.map(L=>O.push(y[L])),A=(0,u.createAttributeWithCacheKey)({perm:T}),M=(0,h.transpose)(g,m,A));const N=S?c.ShapeUtil.sizeToDimension(O,v-1):c.ShapeUtil.sizeToDimension(y,v-1),B=S?c.ShapeUtil.sizeFromDimension(O,v-1):c.ShapeUtil.sizeFromDimension(y,v-1),$=t(g,S?M:m,_,N,B);return S?(0,h.transpose)(g,$,A):$};const t=(g,m,_,y,w)=>{const v=e(g,m[0],y,w,[y]),S=g.run(Object.assign(Object.assign({},f),{cacheHint:_.cacheKey,get:()=>v}),m),O=r(g,m[0],y,w,v.output.dims,[y]),A=g.run(Object.assign(Object.assign({},l),{cacheHint:_.cacheKey,get:()=>O}),[m[0],S]),T=i(g,m[0],y,w,v.output.dims,O.output.dims);return[g.run(Object.assign(Object.assign({},o),{cacheHint:_.cacheKey,get:()=>T}),[m[0],S,A])]},e=(g,m,_,y,w)=>{const[v,S]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),O=w.length;if(_<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1)throw new Error("Dimensionality of the output should be 1");if(w[0]!==_)throw new Error("Shape of the output should be equal to logical row count");const A=(0,p.getGlsl)(g.session.backend.glContext.version),T=` - float process(int[${O}] indices) { - int logical_row_start_offset = indices[0] * ${y}; - - float max = getColorAsFloat(${A.texture2D}(A, offsetToCoords(logical_row_start_offset, ${v}, - ${S} ))); - for(int i=1; i<${y}; ++i) - { - float current = getColorAsFloat(${A.texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${v}, ${S}))); - if(current > max) - max = current; - } - - return max; - }`;return Object.assign(Object.assign({},f),{output:{dims:w,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},r=(g,m,_,y,w,v)=>{const[S,O]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),A=v.length;if(_<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1)throw new Error("Dimensionality of the output should be 1");if(v[0]!==_)throw new Error("Shape of the output should be equal to logical row count");if(w.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==_)throw new Error("Shape of the intermediate results should be equal to logical row count");const T=` - float process(int[${A}] indices) { - int logical_row_start_offset = indices[0] * ${y}; - - float norm_factor = 0.0; - float max = _Max(indices); - for(int i=0; i<${y}; ++i) - { - norm_factor += exp(getColorAsFloat(${(0,p.getGlsl)(g.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${S}, ${O}))) - max); - } - - return norm_factor; - }`;return Object.assign(Object.assign({},l),{output:{dims:v,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},i=(g,m,_,y,w,v)=>{const[S,O]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),A=m.dims.length;if(_<1||y<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1||v.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(w[0]!==_||v[0]!==_)throw new Error("Shape of the intermediate results should be equal to logical row count");const T=` - float process(int[${A}] indices) { - - // get offset of current logical tensor index from the 2-D texture coordinates (TexCoords) - int offset = coordsToOffset(TexCoords, ${S}, ${O}); - - //determine the logical row for this index - int logical_row_index[1]; - logical_row_index[0] = offset / ${y}; - - float norm_factor = _Norm(logical_row_index); - - // avoid possible division by 0 - // if norm_facor is 0, all elements are zero - // if so, return 0 - if(norm_factor == 0.0) - return 0.0; - - return exp(_A(indices) - _Max(logical_row_index)) / norm_factor; - }`;return Object.assign(Object.assign({},o),{output:{dims:m.dims,type:m.type,textureType:s.TextureType.unpacked},shaderSource:T})},d=g=>{if(!g||g.length!==1)throw new Error("Softmax requires 1 input.");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type")}},5975:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSplitAttributes=n.split=void 0;const u=a(246),c=a(2517),p=a(2039),s={name:"Split",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.split=(o,t,e)=>{l(t);const r=c.ShapeUtil.normalizeAxis(e.axis,t[0].dims.length),i=h(o,t,r,e),d=[];for(let g=0;gf(o,t[0],e,r,g)}),t));return d},n.parseSplitAttributes=o=>{const t=o.attributes.getInt("axis",0),e=o.attributes.getInts("split",[]),r=o.outputs.length;return(0,u.createAttributeWithCacheKey)({axis:t,split:e,numOutputs:r})};const h=(o,t,e,r)=>{const[,i]=c.SplitUtil.splitShape(t[0].dims,e,r.split,r.numOutputs);return i.length},f=(o,t,e,r,i)=>{const[d,g]=c.SplitUtil.splitShape(t.dims,r,e.split,e.numOutputs),m=g[i],_=d[i],y=` - float process(int indices[${_.length}]) { - indices[${r}] += ${m}; - return _A(indices); - } - `;return Object.assign(Object.assign({},s),{cacheHint:`${e.cacheKey}:${i}`,output:{dims:_,type:t.type,textureType:p.TextureType.unpacked},shaderSource:y})},l=o=>{if(!o||o.length!==1)throw new Error("Split requires one input.");if(o[0].type!=="int8"&&o[0].type!=="uint8"&&o[0].type!=="int16"&&o[0].type!=="uint16"&&o[0].type!=="int32"&&o[0].type!=="uint32"&&o[0].type!=="float32"&&o[0].type!=="float64"&&o[0].type!=="bool")throw new Error("Invalid input type.")}},3933:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSqueezeAttributes=n.squeezeV13=n.squeeze=void 0;const u=a(2517);n.squeeze=(s,h,f)=>{c(h);const l=u.ShapeUtil.squeezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],l)]},n.squeezeV13=(s,h)=>(p(h),(0,n.squeeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseSqueezeAttributes=s=>s.attributes.getInts("axes");const c=s=>{if(!s||s.length!==1)throw new Error("Squeeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Squeeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},6558:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sum=void 0;const u=a(5060),c=a(2039);n.sum=(h,f)=>{s(f);const l={name:"Sum",inputNames:f.map((o,t)=>`X${t}`),inputTypes:new Array(f.length).fill(c.TextureType.unpacked)};return[h.run(Object.assign(Object.assign({},l),{get:()=>p(h,f,l)}),f)]};const p=(h,f,l)=>{const o=(0,u.getGlsl)(h.session.backend.glContext.version),t=f[0].dims.slice(),e=` - void main() { - vec4 result = ${f.map((r,i)=>`${o.texture2D}(X${i},TexCoords)`).join(" + ")}; - ${o.output} = result; - } - `;return Object.assign(Object.assign({},l),{output:{dims:t,type:f[0].type,textureType:c.TextureType.unpacked},hasMain:!0,shaderSource:e})},s=h=>{if(!h||h.length===0)throw new Error("Sum requires inputs.");const f=h[0].dims.length;for(let l=1;l{Object.defineProperty(n,"__esModule",{value:!0}),n.tile=void 0;const u=a(782),c=a(2039);n.tile=(h,f)=>{s(f);const l={name:"Tile",inputNames:["A"],inputTypes:[c.TextureType.unpacked]};return[h.run(Object.assign(Object.assign({},l),{get:()=>p(h,f,l)}),f)]};const p=(h,f,l)=>{const o=f[0].dims.slice(),t=new Array(o.length),e=[];for(let d=0;d{if(!h||h.length!==2)throw new Error("Tile requires 2 input.");if(h[1].dims.length!==1)throw new Error("The second input shape must 1 dimension.");if(h[1].dims[0]!==h[0].dims.length)throw new Error("Invalid input shape.");if(u.NUMBER_TYPES.indexOf(h[0].type)===-1)throw new Error("Invalid input type.");if(h[1].type!=="int32"&&h[1].type!=="int16")throw new Error("Invalid repeat type.")}},3738:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseTransposeAttributes=n.transpose=void 0;const u=a(246),c=a(2517),p=a(2039),s={name:"Transpose",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.transpose=(e,r,i)=>(t(r),[e.run(Object.assign(Object.assign({},s),{cacheHint:i.cacheKey,get:()=>h(e,r[0],i.perm)}),r)]),n.parseTransposeAttributes=e=>(0,u.createAttributeWithCacheKey)({perm:e.attributes.getInts("perm",[])});const h=(e,r,i)=>{const d=r.dims;i=f(d,i);const g=l(d,i),m=d.length,_=` - ${o("perm",i,m)} - float process(int indices[${m}]) { - int a[${m}]; - perm(a, indices); - return _A(a); - }`;return Object.assign(Object.assign({},s),{output:{dims:g,type:r.type,textureType:p.TextureType.unpacked},shaderSource:_})},f=(e,r)=>(r&&r.length!==e.length&&(r=[...e.keys()].reverse()),r),l=(e,r)=>(r=f(e,r),c.ShapeUtil.sortBasedOnPerm(e,r)),o=(e,r,i)=>{const d=[];d.push(`void ${e}(out int a[${i}], int src[${i}]) {`);for(let g=0;g{if(!e||e.length!==1)throw new Error("Transpose requires 1 input.");if(e[0].type!=="float32"&&e[0].type!=="float64")throw new Error("input should be float tensor")}},8710:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.encodeAsUint8=void 0;const u=a(5060),c=a(2039);n.encodeAsUint8=(p,s)=>{const h=s.shape,f=(0,u.getGlsl)(p.session.backend.glContext.version),l=` - const float FLOAT_MAX = 1.70141184e38; - const float FLOAT_MIN = 1.17549435e-38; - - bool isNaN(float val) { - return (val < 1.0 || 0.0 < val || val == 0.0) ? false : true; - } - - highp vec4 encodeAsUint8(highp float v) { - if (isNaN(v)) { - return vec4(255, 255, 255, 255); - } - - highp float av = abs(v); - - if(av < FLOAT_MIN) { - return vec4(0.0, 0.0, 0.0, 0.0); - } else if(v > FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 127.0) / 255.0; - } else if(v < -FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 255.0) / 255.0; - } - - highp vec4 c = vec4(0,0,0,0); - - highp float e = floor(log2(av)); - highp float m = exp2(fract(log2(av))) - 1.0; - - c[2] = floor(128.0 * m); - m -= c[2] / 128.0; - c[1] = floor(32768.0 * m); - m -= c[1] / 32768.0; - c[0] = floor(8388608.0 * m); - - highp float ebias = e + 127.0; - c[3] = floor(ebias / 2.0); - ebias -= c[3] * 2.0; - c[2] += floor(ebias) * 128.0; - - c[3] += 128.0 * step(0.0, -v); - - return c / 255.0; - } - - void main() { - float value = ${f.texture2D}(X,TexCoords).r; - ${f.output} = encodeAsUint8(value); - }`,o={name:"Uint8Encode",inputTypes:[c.TextureType.unpacked],inputNames:["X"],output:{dims:h,type:s.tensor.type,textureType:c.TextureType.downloadUint8AsFloat},shaderSource:l,hasMain:!0};return p.executeProgram(o,[s.tensor])}},4909:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.tanh=n.tan=n.sqrt=n.sin=n.sigmoid=n.relu=n.not=n.neg=n.log=n.parseLeakyReluAttributes=n.leakyRelu=n.identity=n.floor=n.exp=n.parseEluAttributes=n.elu=n.cos=n.ceil=n.clipV11=n.parseClipAttributes=n.clip=n.atan=n.asin=n.acos=n.abs=n.glslTanh=n.glslTan=n.glslSqrt=n.glslSigmoid=n.glslRelu=n.glslSin=n.glslNot=n.glslNeg=n.glslLog=n.glslLeakyRelu=n.glslIdentity=n.glslClip=n.glslFloor=n.glslExp=n.glslElu=n.glslCos=n.glslCeil=n.glslAtan=n.glslAsin=n.glslAcos=n.glslAbs=void 0;const u=a(246),c=a(2517),p=a(8520),s=a(5060),h=a(2039);function f(){return $("abs")}function l(){return $("acos")}function o(){return $("asin")}function t(){return $("atan")}function e(){return $("ceil")}function r(){return $("cos")}function i(C){const z="elu";return{body:` - const float alpha = float(${C}); - - float ${z}_(float a) { - return a >= 0.0 ? a: (exp(a) - 1.0) * alpha; - } - vec4 ${z}_(vec4 v) { - return vec4(${z}_(v.x), ${z}_(v.y), ${z}_(v.z), ${z}_(v.w)); - } - `,name:z,type:p.FunctionType.ValueBased}}function d(){return $("exp")}function g(){return $("floor")}function m(C,z){const J="clip";return{body:` - const float min = float(${C}); - const float max = float(${z}); - - float ${J}_(float a) { - return clamp(a, min, max); - } - vec4 ${J}_(vec4 v) { - return clamp(v, min, max); - } - `,name:J,type:p.FunctionType.ValueBased}}function _(){const C="indentity";return{body:` - float ${C}_(float a) { - return a; - } - vec4 ${C}_(vec4 v) { - return v; - } - `,name:C,type:p.FunctionType.ValueBased}}function y(C){const z="leakyRelu";return{body:` - const float alpha = float(${C}); - - float ${z}_(float a) { - return a < 0.0 ? a * alpha : a; - } - vec4 ${z}_(vec4 v) { - return vec4(${z}_(v.x), ${z}_(v.y), ${z}_(v.z), ${z}_(v.w)); - } - `,name:z,type:p.FunctionType.ValueBased}}function w(){return $("log")}function v(){const C="neg";return{body:` - float ${C}_(float a) { - return -a; - } - vec4 ${C}_(vec4 v) { - return -v; - } - `,name:C,type:p.FunctionType.ValueBased}}function S(){const C="not";return{body:` - float ${C}_(float a) { - return float( ! bool(a) ); - } - bool ${C}_(bool a) { - return !a; - } - vec4 ${C}_(vec4 v) { - return vec4(!bool(v.x), !bool(v.y), !bool(v.z), !bool(v.w)); - } - bvec4 ${C}_(bvec4 v) { - return bvec4(!v.x, !v.y, !v.z, !v.w); - } - `,name:C,type:p.FunctionType.ValueBased}}function O(){return $("sin")}function A(){const C="relu";return{body:` - float ${C}_(float a) { - return max( a, 0.0 ); - } - vec4 ${C}_(vec4 v) { - return max( v, 0.0 ); - } - `,name:C,type:p.FunctionType.ValueBased}}function T(){const C="sigmoid";return{body:` - float ${C}_(float a) { - return 1.0 / (1.0 + exp(-a)); - } - vec4 ${C}_(vec4 v) { - return 1.0 / (1.0 + exp(-v)); - } - `,name:C,type:p.FunctionType.ValueBased}}function M(){return $("sqrt")}function N(){return $("tan")}function B(){const C="tanh";return{body:` - float ${C}_(float a) { - a = clamp(a, -10., 10.); - a = exp(2.*a); - return (a - 1.) / (a + 1.); - } - vec4 ${C}_(vec4 v) { - v = clamp(v, -10., 10.); - v = exp(2.*v); - return (v - 1.) / (v + 1.); - } - `,name:C,type:p.FunctionType.ValueBased}}function $(C){return{body:` - float ${C}_(float a) { - return ${C}(a); - } - vec4 ${C}_(vec4 v) { - return ${C}(v); - } - `,name:C,type:p.FunctionType.ValueBased}}n.glslAbs=f,n.glslAcos=l,n.glslAsin=o,n.glslAtan=t,n.glslCeil=e,n.glslCos=r,n.glslElu=i,n.glslExp=d,n.glslFloor=g,n.glslClip=m,n.glslIdentity=_,n.glslLeakyRelu=y,n.glslLog=w,n.glslNeg=v,n.glslNot=S,n.glslSin=O,n.glslRelu=A,n.glslSigmoid=T,n.glslSqrt=M,n.glslTan=N,n.glslTanh=B;const L=(C,z,J,X)=>{const te=C.session.pack?h.TextureType.packed:h.TextureType.unpacked,ne={name:J.name,inputTypes:[te],inputNames:["A"],cacheHint:X};return Object.assign(Object.assign({},ne),{get:()=>((me,Me,Oe,ce)=>{const Te=me.session.pack?h.TextureType.packed:h.TextureType.unpacked,ye=(0,s.getGlsl)(me.session.backend.glContext.version);return Object.assign(Object.assign({},Me),{output:{dims:Oe.dims,type:Oe.type,textureType:Te},shaderSource:` - ${ce.body} - void main() { - vec4 v = ${ye.texture2D}(A, TexCoords); - v = ${ce.name}_(v); - ${ye.output} = v; - } - `,hasMain:!0})})(C,ne,z,J)})};n.abs=(C,z)=>[C.run(L(C,z[0],f()),z)],n.acos=(C,z)=>[C.run(L(C,z[0],l()),z)],n.asin=(C,z)=>[C.run(L(C,z[0],o()),z)],n.atan=(C,z)=>[C.run(L(C,z[0],t()),z)],n.clip=(C,z,J)=>[C.run(L(C,z[0],m(J.min,J.max),J.cacheKey),z)],n.parseClipAttributes=C=>(0,u.createAttributeWithCacheKey)({min:C.attributes.getFloat("min",c.MIN_CLIP),max:C.attributes.getFloat("max",c.MAX_CLIP)}),n.clipV11=(C,z)=>{const J=H(C,z);return(0,n.clip)(C,[z[0]],J)};const H=(C,z)=>{if(z.length>=3&&(!C.session.isInitializer(z[1].dataId)||!C.session.isInitializer(z[2].dataId)))throw new Error("dynamic clip attributes are not allowed");const J=z.length>=3?z[1].numberData[0]:c.MIN_CLIP,X=z.length>=3?z[2].numberData[0]:c.MAX_CLIP;return(0,u.createAttributeWithCacheKey)({min:J,max:X})};n.ceil=(C,z)=>[C.run(L(C,z[0],e()),z)],n.cos=(C,z)=>[C.run(L(C,z[0],r()),z)],n.elu=(C,z,J)=>[C.run(L(C,z[0],i(J.alpha),J.cacheKey),z)],n.parseEluAttributes=C=>(0,u.createAttributeWithCacheKey)({alpha:C.attributes.getFloat("alpha",1)}),n.exp=(C,z)=>[C.run(L(C,z[0],d()),z)],n.floor=(C,z)=>[C.run(L(C,z[0],g()),z)],n.identity=(C,z)=>[C.run(L(C,z[0],_()),z)],n.leakyRelu=(C,z,J)=>[C.run(L(C,z[0],y(J.alpha),J.cacheKey),z)],n.parseLeakyReluAttributes=C=>(0,u.createAttributeWithCacheKey)({alpha:C.attributes.getFloat("alpha",.01)}),n.log=(C,z)=>[C.run(L(C,z[0],w()),z)],n.neg=(C,z)=>[C.run(L(C,z[0],v()),z)],n.not=(C,z)=>[C.run(L(C,z[0],S()),z)],n.relu=(C,z)=>[C.run(L(C,z[0],A()),z)],n.sigmoid=(C,z)=>[C.run(L(C,z[0],T()),z)],n.sin=(C,z)=>[C.run(L(C,z[0],O()),z)],n.sqrt=(C,z)=>[C.run(L(C,z[0],M()),z)],n.tan=(C,z)=>[C.run(L(C,z[0],N()),z)],n.tanh=(C,z)=>[C.run(L(C,z[0],B()),z)]},5611:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackProgramInfoLoader=n.createUnpackProgramInfo=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h={name:"unpack",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.createUnpackProgramInfo=(f,l)=>{const o=l.dims.length,t=(0,s.getChannels)("rc",o),e=t.slice(-2),r=(0,p.getCoordsDataType)(o),i=(0,s.unpackFromChannel)(),d=l.dims.length===0?"":function(_,y){if(_===1)return"rc";let w="";for(let v=0;v<_;v++)w+=y[v],v<_-1&&(w+=",");return w}(o,t),g=o<=1?"rc":`vec2(${e.join(",")})`,m=` - ${i} - void main() { - ${r} rc = getOutputCoords(); - - // Sample the texture with the coords to get the rgba channel value. - vec4 packedInput = getA(${d}); - - ${(0,u.getGlsl)(f.session.backend.glContext.version).output} = vec4(getChannel(packedInput, ${g}), 0, 0, 0); - } - `;return Object.assign(Object.assign({},h),{hasMain:!0,output:{dims:l.dims,type:l.type,textureType:c.TextureType.unpacked},shaderSource:m})},n.createUnpackProgramInfoLoader=(f,l)=>Object.assign(Object.assign({},h),{get:()=>(0,n.createUnpackProgramInfo)(f,l)})},8428:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseUnsqueezeAttributes=n.unsqueezeV13=n.unsqueeze=void 0;const u=a(2517);n.unsqueeze=(s,h,f)=>{c(h);const l=u.ShapeUtil.unsqueezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],l)]},n.unsqueezeV13=(s,h)=>(p(h),(0,n.unsqueeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseUnsqueezeAttributes=s=>s.attributes.getInts("axes");const c=s=>{if(!s||s.length!==1)throw new Error("Unsqueeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Unsqueeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},9793:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.scalesValidation=n.validateInputs=n.parseUpsampleAttributes=n.parseUpsampleAttributesV9=n.parseUpsampleAttributesV7=n.upsample=void 0;const u=a(246),c=a(5060),p=a(2039),s={name:"Upsample",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.upsample=(f,l,o)=>((0,n.validateInputs)(l,o),[f.run(Object.assign(Object.assign({},s),{cacheHint:o.cacheKey,get:()=>h(f,l,o)}),l)]),n.parseUpsampleAttributesV7=f=>(0,n.parseUpsampleAttributes)(f,7),n.parseUpsampleAttributesV9=f=>(0,n.parseUpsampleAttributes)(f,9),n.parseUpsampleAttributes=(f,l)=>{const o=l>=10,t=f.attributes.getString("mode","nearest");if(t!=="nearest"&&t!=="linear"&&(l<11||t!=="cubic"))throw new Error(`unrecognized mode: ${t}`);let e=[];l<9&&(e=f.attributes.getFloats("scales"),(0,n.scalesValidation)(e,t,o));const r=f.attributes.getFloat("extrapolation_value",0),i=l>10?f.attributes.getString("coordinate_transformation_mode","half_pixel"):"asymmetric";if(["asymmetric","pytorch_half_pixel","tf_half_pixel_for_nn","align_corners","tf_crop_and_resize","half_pixel"].indexOf(i)===-1)throw new Error(`coordinate_transform_mode '${i}' is not supported`);const d=i==="tf_crop_and_resize",g=d,m=t==="nearest"&&l>=11?f.attributes.getString("nearest_mode","round_prefer_floor"):"";if(["round_prefer_floor","round_prefer_ceil","floor","ceil",""].indexOf(m)===-1)throw new Error(`nearest_mode '${m}' is not supported`);const _=f.attributes.getFloat("cubic_coeff_a",-.75),y=f.attributes.getInt("exclude_outside",0)!==0;if(y&&t!=="cubic")throw new Error("exclude_outside can be set to 1 only when mode is CUBIC.");const w=l<11||t==="nearest"&&i==="asymmetric"&&m==="floor";let v=0,S=0,O=0;return l>10?f.inputs.length>2?(v=1,S=2,O=3):(S=1,O=2):l===9&&(S=1),(0,u.createAttributeWithCacheKey)({opset:l,isResize:o,mode:t,scales:e,extrapolationValue:r,coordinateTransformMode:i,useExtrapolation:g,needRoiInput:d,nearestMode:m,cubicCoefficientA:_,excludeOutside:y,useNearest2xOptimization:w,roiInputIdx:v,scalesInputIdx:S,sizesInputIdx:O})};const h=(f,l,o)=>{const t=(0,c.getGlsl)(f.session.backend.glContext.version),[e,r]=f.calculateTextureWidthAndHeight(l[0].dims,p.TextureType.unpacked),i=l[0].dims.map((O,A)=>Math.floor(O*o.scales[A])),[d,g]=f.calculateTextureWidthAndHeight(i,p.TextureType.unpacked),m=i.length,_=new Array(m),y=new Array(m);let w=` - int output_pitches[${m}]; - int input_pitches[${m}]; - `;for(let O=m-1;O>=0;O--)_[O]=O===m-1?1:_[O+1]*i[O+1],y[O]=O===m-1?1:y[O+1]*l[0].dims[O+1],w+=` - output_pitches[${O}] = ${_[O]}; - input_pitches[${O}] = ${y[O]}; - `;const v=` - float getInputFloat(int index) { - vec2 coords = offsetToCoords(index, ${e}, ${r}); - float value = getColorAsFloat(${t.texture2D}(X, coords)); - return value; - } - `,S=o.mode==="nearest"?` - ${v} - float process(int indices[${m}]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${w} - - int d, m; - for (int dim = 0; dim < ${m}; ++dim) { - d = output_index / output_pitches[dim]; - m = output_index - d * output_pitches[dim]; - output_index = m; - - if (scales[dim] != 1 && d > 0) { - int d2 = d / scales[dim]; - m = d - d2 * scales[dim]; - d = d2; - } - input_index += input_pitches[dim] * d; - } - - return getInputFloat(input_index); - }`:m===4?` - ${v} - float process(int indices[4]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${w} - - int m; - int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m / output_pitches[1]; - m = m - index_of_dim1 * output_pitches[1]; - index_of_dim2 = m / output_pitches[2]; - m = m - index_of_dim2 * output_pitches[2]; - index_of_dim3 = m; - - int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset; - index_of_input_dim2 = index_of_dim2 / scales[2]; - y_offset = index_of_dim2 - index_of_input_dim2 * scales[2]; - index_of_input_dim3 = index_of_dim3 / scales[3]; - x_offset = index_of_dim3 - index_of_input_dim3 * scales[3]; - - input_index = index_of_dim0 * input_pitches[0] + - index_of_dim1 * input_pitches[1] + - index_of_input_dim2 * input_pitches[2] + - index_of_input_dim3; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim2 = false; - if (index_of_input_dim2 == (${l[0].dims[2]} - 1)) { - // It's the end in dimension 2 - x01 = x00; - end_of_dim2 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[2]); - } - - if (index_of_input_dim3 == (input_pitches[2] - 1)) { - // It's the end in dimension 3 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim2 ? x10 : getInputFloat(input_index + input_pitches[2] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[3]); - }`:` - ${v} - float process(int indices[2]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${w} - - int m; - int index_of_dim0, index_of_dim1; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m; - - int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset; - index_of_input_dim0 = index_of_dim0 / scales[0]; - y_offset = index_of_dim0 - index_of_input_dim0 * scales[0]; - index_of_input_dim1 = index_of_dim1 / scales[1]; - x_offset = index_of_dim1 - index_of_input_dim1 * scales[1]; - - input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim0 = false; - if (index_of_input_dim0 == (${l[0].dims[0]} - 1)) { - // It's the end in dimension 0 - x01 = x00; - end_of_dim0 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[0]); - } - - if (index_of_input_dim1 == (input_pitches[0] - 1)) { - // It's the end in dimension 1 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim0 ? x10 : getInputFloat(input_index + input_pitches[0] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[1]); - }`;return Object.assign(Object.assign({},s),{output:{dims:i,type:l[0].type,textureType:p.TextureType.unpacked},shaderSource:S,variables:[{name:"scales",type:"int",arrayLength:o.scales.length,data:o.scales.map(O=>Math.ceil(O))}]})};n.validateInputs=(f,l)=>{if(!f||l.opset<9&&f.length!==1||l.opset>=9&&l.opset<11&&f.length!==2||l.opset>=11&&f.length<2)throw new Error("invalid inputs.");if(l.scales.length>0&&f[0].dims.length!==l.scales.length)throw new Error("Invalid input shape.");if(f[0].type==="string")throw new Error("Invalid input tensor types.")},n.scalesValidation=(f,l,o)=>{if(o){for(const t of f)if(t<=0)throw new Error("Scale value should be greater than 0.")}else for(const t of f)if(t<1)throw new Error("Scale value should be greater than or equal to 1.");if(!(l!=="linear"&&l!=="cubic"||f.length===2||f.length===4&&f[0]===1&&f[1]===1))throw new Error(`'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the ${o?"Resize":"Upsample"} opeartor.`)}},1958:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ProgramManager=void 0;const u=a(1670),c=a(6231),p=a(8879),s=a(5060);n.ProgramManager=class{constructor(h,f,l){this.profiler=h,this.glContext=f,this.textureLayoutStrategy=l,this.repo=new Map,this.attributesBound=!1}getArtifact(h){return this.repo.get(h)}setArtifact(h,f){this.repo.set(h,f)}run(h,f,l){var o;this.profiler.event("op",`ProgramManager.run ${(o=h.programInfo.name)!==null&&o!==void 0?o:"unknown kernel"}`,()=>{var t;const e=this.glContext.gl,r=h.program;e.useProgram(r);try{this.bindOutput(l),this.attributesBound||this.bindAttributes(h.attribLocations),this.bindUniforms(h.uniformLocations,(t=h.programInfo.variables)!==null&&t!==void 0?t:[],f)}catch(i){throw c.Logger.error("ProgramManager",h.programInfo.shaderSource),i}this.profiler.event("backend","GlContext.draw()",()=>{this.glContext.draw()})},this.glContext)}dispose(){this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach(h=>this.glContext.deleteProgram(h.program))}build(h,f,l){return this.profiler.event("backend","ProgramManager.build",()=>{const o=new p.GlslPreprocessor(this.glContext,h,f,l),t=o.preprocess(),e=this.compile(t);return{programInfo:h,program:e,uniformLocations:this.getUniformLocations(e,o.context.programInfo.inputNames,o.context.programInfo.variables),attribLocations:this.getAttribLocations(e)}})}compile(h){if(!this.vertexShader){c.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");const o=(0,s.getVertexShaderSource)(this.glContext.version);this.vertexShader=this.glContext.compileShader(o,this.glContext.gl.VERTEX_SHADER)}u.env.debug&&c.Logger.verbose("ProrgramManager",`FragShader: -${h} -`);const f=this.glContext.compileShader(h,this.glContext.gl.FRAGMENT_SHADER),l=this.glContext.createProgram(this.vertexShader,f);return this.glContext.deleteShader(f),l}bindOutput(h){const f=h.width,l=h.height;c.Logger.verbose("ProrgramManager",`Binding output texture to Framebuffer: w/h=${f}/${l}, shape=${h.shape}, type=${h.tensor.type}`),this.glContext.attachFramebuffer(h.texture,f,l)}bindAttributes(h){const f=h.position,l=h.textureCoord;this.glContext.setVertexAttributes(f,l),this.attributesBound=!0}bindUniforms(h,f,l){var o;const t=this.glContext.gl;let e=0;for(const{name:r,type:i,location:d,arrayLength:g}of h){const m=(o=f.find(_=>_.name===r))===null||o===void 0?void 0:o.data;if(i!=="sampler2D"&&!m)throw new Error(`variable '${r}' does not have data defined in program info`);switch(i){case"sampler2D":this.bindTexture(l[e],d,e),e++;break;case"float":g?t.uniform1fv(d,m):t.uniform1f(d,m);break;case"int":g?t.uniform1iv(d,m):t.uniform1i(d,m);break;default:throw new Error(`Uniform not implemented: ${i}`)}}}bindTexture(h,f,l){this.glContext.bindTextureToUniform(h.texture,l,f)}getAttribLocations(h){return{position:this.getAttribLocation(h,"position"),textureCoord:this.getAttribLocation(h,"textureCoord")}}getUniformLocations(h,f,l){const o=[];if(f)for(const t of f)o.push({name:t,type:"sampler2D",location:this.getUniformLocation(h,t)});if(l)for(const t of l)o.push(Object.assign(Object.assign({},t),{location:this.getUniformLocation(h,t.name)}));return o}getUniformLocation(h,f){const l=this.glContext.gl.getUniformLocation(h,f);if(l===null)throw new Error(`Uniform ${f} not found.`);return l}getAttribLocation(h,f){return this.glContext.gl.getAttribLocation(h,f)}}},6416:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLSessionHandler=void 0;const u=a(6231),c=a(1047),p=a(8316),s=a(1640),h=a(1958),f=a(7859),l=a(5702);n.WebGLSessionHandler=class{constructor(o,t){this.backend=o,this.context=t,this.layoutStrategy=new f.PreferLogicalStrategy(o.glContext.maxTextureSize),this.programManager=new h.ProgramManager(this.context.profiler,o.glContext,this.layoutStrategy),this.textureManager=new l.TextureManager(o.glContext,this.layoutStrategy,this.context.profiler,{reuseTextures:o.textureCacheMode==="full"}),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map,this.pack=o.pack,this.pack2unpackMap=new Map,this.unpack2packMap=new Map}createInferenceHandler(){return new p.WebGLInferenceHandler(this)}onGraphInitialized(o){const t=o.getValues().filter(e=>e.from===-1&&e.tensor).map(e=>e.tensor.dataId);this.initializers=new Set(t)}isInitializer(o){return!!this.initializers&&this.initializers.has(o)}addInitializer(o){this.initializers.add(o)}getTextureData(o,t){return t?this.packedTextureDataCache.get(o):this.unpackedTextureDataCache.get(o)}setTextureData(o,t,e=!1){u.Logger.verbose("WebGLSessionHandler","Storing Texture data in cache"),e?this.packedTextureDataCache.set(o,t):this.unpackedTextureDataCache.set(o,t)}dispose(){this.programManager.dispose(),this.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.unpackedTextureDataCache=new Map}resolve(o,t,e){const r=(0,c.resolveOperator)(o,t,s.WEBGL_OP_RESOLVE_RULES);return{impl:r.opImpl,context:r.opInit?r.opInit(o,e):o}}}},7769:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Uint8DataEncoder=n.RGBAFloatDataEncoder=n.RedFloat32DataEncoder=void 0;const u=a(6231);n.RedFloat32DataEncoder=class{constructor(c,p=1){if(p===1)this.internalFormat=c.R32F,this.format=c.RED,this.textureType=c.FLOAT,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA32F,this.format=c.RGBA,this.textureType=c.FLOAT,this.channelSize=p}}encode(c,p){let s,h;return c.constructor!==Float32Array&&(u.Logger.warning("Encoder","data was not of type Float32; creating new Float32Array"),h=new Float32Array(c)),p*this.channelSize>c.length?(u.Logger.warning("Encoder","Source data too small. Allocating larger array"),h=c,s=this.allocate(p*this.channelSize),h.forEach((f,l)=>s[l]=f)):(h=c,s=h),s}allocate(c){return new Float32Array(4*c)}decode(c,p){return this.channelSize===1?c.filter((s,h)=>h%4==0).subarray(0,p):c.subarray(0,p)}},n.RGBAFloatDataEncoder=class{constructor(c,p=1,s){if(p!==1&&p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.channelSize=p,this.textureType=s||c.FLOAT}encode(c,p){let s=c;return this.channelSize===1&&(u.Logger.verbose("Encoder","Exploding into a larger array"),s=this.allocate(p),c.forEach((h,f)=>s[4*f]=h)),s}allocate(c){return new Float32Array(4*c)}decode(c,p){return this.channelSize===1?c.filter((s,h)=>h%4==0).subarray(0,p):c.subarray(0,p)}},n.Uint8DataEncoder=class{constructor(c,p=1){if(this.channelSize=4,p===1)this.internalFormat=c.ALPHA,this.format=c.ALPHA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=p}}encode(c,p){return new Uint8Array(c.buffer,c.byteOffset,c.byteLength)}allocate(c){return new Uint8Array(c*this.channelSize)}decode(c,p){if(c instanceof Uint8Array)return c.subarray(0,p);throw new Error(`Invalid array type: ${c.constructor}`)}}},7859:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getBatchDim=n.sizeToSquarishShape=n.getRowsCols=n.sizeFromShape=n.isInt=n.parseAxisParam=n.squeezeShape=n.PreferLogicalStrategy=n.AlwaysKeepOriginalSizeStrategy=void 0;const u=a(6231),c=a(2517);function p(o,t){const e=[],r=[],i=t!=null&&Array.isArray(t)&&t.length===0,d=t==null||i?null:s(t,o).sort();let g=0;for(let m=0;mm)&&o[m]===1&&(e.push(o[m]),r.push(m)),d[g]<=m&&g++}o[m]!==1&&(e.push(o[m]),r.push(m))}return{newShape:e,keptDims:r}}function s(o,t){const e=t.length;return o=o==null?t.map((r,i)=>i):[].concat(o),(0,c.assert)(o.every(r=>r>=-e&&r`All values in axis param must be in range [-${e}, ${e}) but got axis ${o}`),(0,c.assert)(o.every(h),()=>`All values in axis param must be integers but got axis ${o}`),o.map(r=>r<0?e+r:r)}function h(o){return o%1==0}function f(o){if(o.length===0)return 1;let t=o[0];for(let e=1;e=o.length?1:o.slice(t.breakAxis).reduce((m,_)=>m*_),g=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((m,_)=>m*_);if(!(d>e||g>e))return[d,g];u.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}const r=o.reduce((d,g)=>d*g);let i=Math.floor(Math.sqrt(r));for(;i=e||r%i!=0)throw new Error(`The given dimensions are outside this GPU's boundaries: ${o}`);return[i,r/i]}},n.PreferLogicalStrategy=class{constructor(o){this.maxTextureSize=o}computeTextureWH(o,t){const e=this.computeTexture(o,t);return t&&t.isPacked&&(e[0]/=2,e[1]/=2),t&&t.reverseWH?[e[1],e[0]]:e}computeTexture(o,t){const e=t&&t.isPacked;if(o.length===0)return e?[2,2]:[1,1];let r=this.maxTextureSize;if(t&&t.breakAxis!==void 0){const g=t.breakAxis>=o.length?1:o.slice(t.breakAxis).reduce((_,y)=>_*y),m=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((_,y)=>_*y);if(!(g>r||m>r))return[g,m];u.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}let i=o.slice(0);e&&(r*=2,i=i.map((g,m)=>m>=i.length-2?i[m]%2==0?i[m]:i[m]+1:i[m]),i.length===1&&(i=[2,i[0]])),i.length!==2&&(i=p(i).newShape);const d=f(i);return i.length<=1&&d<=r?[1,d]:i.length===2&&i[0]<=r&&i[1]<=r?i:i.length===3&&i[0]*i[1]<=r&&i[2]<=r?[i[0]*i[1],i[2]]:i.length===3&&i[0]<=r&&i[1]*i[2]<=r?[i[0],i[1]*i[2]]:i.length===4&&i[0]*i[1]*i[2]<=r&&i[3]<=r?[i[0]*i[1]*i[2],i[3]]:i.length===4&&i[0]<=r&&i[1]*i[2]*i[3]<=r?[i[0],i[1]*i[2]*i[3]]:e?l(d/4).map(g=>2*g):l(d)}},n.squeezeShape=p,n.parseAxisParam=s,n.isInt=h,n.sizeFromShape=f,n.getRowsCols=function(o){if(o.length===0)throw Error("Cannot get rows and columns of an empty shape array.");return[o.length>1?o[o.length-2]:1,o[o.length-1]]},n.sizeToSquarishShape=l,n.getBatchDim=function(o,t=2){return f(o.slice(0,o.length-t))}},4057:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createTextureLayoutFromShape=n.calculateTextureWidthAndHeight=n.createTextureLayoutFromTextureType=void 0;const u=a(2517),c=a(2039);n.createTextureLayoutFromTextureType=(p,s,h)=>{const f=h===c.TextureType.unpacked||h===c.TextureType.unpackedReversed?1:4,l=h===c.TextureType.packed,o=h===c.TextureType.unpackedReversed||h===c.TextureType.packed,t=h===c.TextureType.packedLastDimension?s.length-1:void 0,e=h===c.TextureType.packedLastDimension?s.map((r,i)=>i===s.length-1?4*r:r):void 0;return(0,n.createTextureLayoutFromShape)(p,s,f,e,{isPacked:l,reverseWH:o,breakAxis:t})},n.calculateTextureWidthAndHeight=(p,s,h)=>{const f=(0,n.createTextureLayoutFromTextureType)(p,s,h);return[f.width,f.height]},n.createTextureLayoutFromShape=(p,s,h=1,f,l)=>{const o=!(!l||!l.isPacked),[t,e]=p.computeTextureWH(o&&f||s,l),r=s.length;let i=s.slice(0);if(r===0&&(i=[1]),h===1)f=s;else if(o){if(h!==4)throw new Error("a packed texture must be 4-channel");f=s,r>0&&(i[r-1]=Math.ceil(i[r-1]/2)),r>1&&(i[r-2]=Math.ceil(i[r-2]/2))}else if(!f)throw new Error("Unpacked shape is needed when using channels > 1");return{width:t,height:e,channels:h,isPacked:o,shape:i,strides:u.ShapeUtil.computeStrides(i),unpackedShape:f,reversedWH:l&&l.reverseWH}}},5702:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.TextureManager=void 0;const u=a(6231);n.TextureManager=class{constructor(c,p,s,h){this.glContext=c,this.layoutStrategy=p,this.profiler=s,this.config=h,this.pendingRead=new Map,h.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}createTextureFromLayout(c,p,s,h){const f=this.toEncoderType(c),l=this.glContext.getEncoder(f,p.channels||1,h);if(p.isPacked&&h===1)throw new Error("not implemented");const o=p.width,t=p.height;let e,r;if(this.config.reuseTextures){e=`${o}x${t}_${l.format}_${l.internalFormat}_${l.textureType}`,r=this.inUseTextures.get(e),r||(r=[],this.inUseTextures.set(e,r));const d=this.idleTextures.get(e);if(d&&d.length>0){const g=d.pop();return r.push(g),h===1&&this.glContext.updateTexture(g,o,t,l,this.toTextureData(c,s)),g}}u.Logger.verbose("TextureManager",`Creating new texture of size ${p.width}x${p.height}`);const i=this.glContext.allocateTexture(o,t,l,this.toTextureData(c,s));return this.config.reuseTextures&&(r.push(i),this.textureLookup.set(i,e)),i}readTexture(c,p,s){return s||(s=1),this.profiler.event("backend","TextureManager.readTexture",()=>{const h=c.shape.reduce((l,o)=>l*o)*s,f=this.glContext.readTexture(c.texture,c.width,c.height,h,this.toEncoderType(p),s);return this.toTensorData(p,f)})}async readTextureAsync(c,p,s){const h=c.tensor.dataId;if(s||(s=1),this.pendingRead.has(h)){const f=this.pendingRead.get(h);return new Promise(l=>f==null?void 0:f.push(l))}return this.profiler.event("backend","TextureManager.readTextureAsync",async()=>{this.pendingRead.set(h,[]);const f=c.shape.reduce((e,r)=>e*r)*s;await this.glContext.createAndWaitForFence();const l=this.glContext.readTexture(c.texture,c.width,c.height,f,this.toEncoderType(p),s),o=this.toTensorData(p,l),t=this.pendingRead.get(h);return this.pendingRead.delete(h),t==null||t.forEach(e=>e(o)),o})}readUint8TextureAsFloat(c){return this.profiler.event("backend","TextureManager.readUint8TextureAsFloat",()=>{const p=c.shape.reduce((h,f)=>h*f),s=this.glContext.readTexture(c.texture,c.width,c.height,4*p,"byte",4);return new Float32Array(s.buffer,s.byteOffset,p)})}releaseTexture(c,p){let s;if(this.config.reuseTextures&&(s=this.textureLookup.get(c.texture),s)){p&&this.textureLookup.delete(s);const h=this.inUseTextures.get(s);if(h){const f=h.indexOf(c.texture);if(f!==-1){h.splice(f,1);let l=this.idleTextures.get(s);l||(l=[],this.idleTextures.set(s,l)),l.push(c.texture)}}}s&&!p||(u.Logger.verbose("TextureManager",`Deleting texture of size ${c.width}x${c.height}`),this.glContext.deleteTexture(c.texture))}toTensorData(c,p){switch(c){case"int16":return p instanceof Int16Array?p:Int16Array.from(p);case"int32":return p instanceof Int32Array?p:Int32Array.from(p);case"int8":return p instanceof Int8Array?p:Int8Array.from(p);case"uint16":return p instanceof Uint16Array?p:Uint16Array.from(p);case"uint32":return p instanceof Uint32Array?p:Uint32Array.from(p);case"uint8":case"bool":return p instanceof Uint8Array?p:Uint8Array.from(p);case"float32":return p instanceof Float32Array?p:Float32Array.from(p);case"float64":return p instanceof Float64Array?p:Float64Array.from(p);default:throw new Error(`TensorData type ${c} is not supported`)}}toTextureData(c,p){if(p)return p instanceof Float32Array?p:new Float32Array(p)}toEncoderType(c){return"float"}clearActiveTextures(){this.glContext.clearActiveTextures()}}},2039:(b,n)=>{var a;Object.defineProperty(n,"__esModule",{value:!0}),n.TextureType=void 0,(a=n.TextureType||(n.TextureType={}))[a.unpacked=0]="unpacked",a[a.unpackedReversed=1]="unpackedReversed",a[a.packed=2]="packed",a[a.downloadUint8AsFloat=3]="downloadUint8AsFloat",a[a.packedLastDimension=4]="packedLastDimension"},9390:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getGlChannels=n.getCoordsDataType=n.getSqueezedParams=n.squeezeInputShape=n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=n.generateShaderFuncNameFromInputSamplerName=n.repeatedTry=n.getPackedShape=void 0;const u=a(2517);n.getPackedShape=function(c){const p=c.length;return c.slice(0,p-1).concat(c[p-1]/4)},n.repeatedTry=async function(c,p=h=>0,s){return new Promise((h,f)=>{let l=0;const o=()=>{if(c())return void h();l++;const t=p(l);s!=null&&l>=s?f():setTimeout(o,t)};o()})},n.generateShaderFuncNameFromInputSamplerName=function(c){return(0,u.assert)(c!==void 0&&c.length!==0,()=>"empty string found for sampler name"),"get"+c.charAt(0).toUpperCase()+c.slice(1)},n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=function(c){return(0,u.assert)(c!==void 0&&c.length!==0,()=>"empty string found for sampler name"),"get"+c.charAt(0).toUpperCase()+c.slice(1)+"AtOutCoords"},n.squeezeInputShape=function(c,p){let s=JSON.parse(JSON.stringify(c));return s=p,s},n.getSqueezedParams=function(c,p){return p.map(s=>c[s]).join(", ")},n.getCoordsDataType=function(c){if(c<=1)return"int";if(c===2)return"ivec2";if(c===3)return"ivec3";if(c===4)return"ivec4";if(c===5)return"ivec5";if(c===6)return"ivec6";throw Error(`GPU for rank ${c} is not yet supported`)},n.getGlChannels=function(c=6){return["x","y","z","w","u","v"].slice(0,c)}},7305:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createNewWebGLContext=n.createWebGLContext=void 0;const u=a(6231),c=a(1713),p={};function s(h){const f=function(){if(typeof document>"u"){if(typeof OffscreenCanvas>"u")throw new TypeError("failed to create canvas: OffscreenCanvas is not supported");return new OffscreenCanvas(1,1)}const t=document.createElement("canvas");return t.width=1,t.height=1,t}();let l;const o={alpha:!1,depth:!1,antialias:!1,stencil:!1,preserveDrawingBuffer:!1,premultipliedAlpha:!1,failIfMajorPerformanceCaveat:!1};if((!h||h==="webgl2")&&(l=f.getContext("webgl2",o),l))try{return new c.WebGLContext(l,2)}catch(t){u.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl2'. Error: ${t}`)}if((!h||h==="webgl")&&(l=f.getContext("webgl",o)||f.getContext("experimental-webgl",o),l))try{return new c.WebGLContext(l,1)}catch(t){u.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl' or 'experimental-webgl'. Error: ${t}`)}throw new Error("WebGL is not supported")}n.createWebGLContext=function h(f){let l;f&&f!=="webgl2"||!("webgl2"in p)?f&&f!=="webgl"||!("webgl"in p)||(l=p.webgl):l=p.webgl2,l=l||s(f),f=f||l.version===1?"webgl":"webgl2";const o=l.gl;return p[f]=l,o.isContextLost()?(delete p[f],h(f)):(o.disable(o.DEPTH_TEST),o.disable(o.STENCIL_TEST),o.disable(o.BLEND),o.disable(o.DITHER),o.disable(o.POLYGON_OFFSET_FILL),o.disable(o.SAMPLE_COVERAGE),o.enable(o.SCISSOR_TEST),o.enable(o.CULL_FACE),o.cullFace(o.BACK),l)},n.createNewWebGLContext=s},1713:function(b,n,a){var u=this&&this.__createBinding||(Object.create?function(o,t,e,r){r===void 0&&(r=e);var i=Object.getOwnPropertyDescriptor(t,e);i&&!("get"in i?!t.__esModule:i.writable||i.configurable)||(i={enumerable:!0,get:function(){return t[e]}}),Object.defineProperty(o,r,i)}:function(o,t,e,r){r===void 0&&(r=e),o[r]=t[e]}),c=this&&this.__setModuleDefault||(Object.create?function(o,t){Object.defineProperty(o,"default",{enumerable:!0,value:t})}:function(o,t){o.default=t}),p=this&&this.__importStar||function(o){if(o&&o.__esModule)return o;var t={};if(o!=null)for(var e in o)e!=="default"&&Object.prototype.hasOwnProperty.call(o,e)&&u(t,o,e);return c(t,o),t};Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLContext=n.linearSearchLastTrue=void 0;const s=a(1670),h=p(a(7769)),f=a(9390);function l(o){let t=0;for(;tthis.isTimerResultAvailable(o)),this.getTimerResult(o)}async createAndWaitForFence(){const o=this.createFence(this.gl);return this.pollFence(o)}createFence(o){let t;const e=o,r=e.fenceSync(e.SYNC_GPU_COMMANDS_COMPLETE,0);return o.flush(),t=r===null?()=>!0:()=>{const i=e.clientWaitSync(r,0,0);return i===e.ALREADY_SIGNALED||i===e.CONDITION_SATISFIED},{query:r,isFencePassed:t}}async pollFence(o){return new Promise(t=>{this.addItemToPoll(()=>o.isFencePassed(),()=>t())})}pollItems(){const o=l(this.itemsToPoll.map(t=>t.isDoneFn));for(let t=0;t<=o;++t){const{resolveFn:e}=this.itemsToPoll[t];e()}this.itemsToPoll=this.itemsToPoll.slice(o+1)}async addItemToPoll(o,t){this.itemsToPoll.push({isDoneFn:o,resolveFn:t}),this.itemsToPoll.length>1||await(0,f.repeatedTry)(()=>(this.pollItems(),this.itemsToPoll.length===0))}}},1036:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ExecutionPlan=void 0;const u=a(6231);class c{constructor(s,h){this.op=s,this.node=h}}n.ExecutionPlan=class{constructor(p,s,h){this.graph=p,this.profiler=h,this.initialize(s)}initialize(p){this.profiler.event("session","ExecutionPlan.initialize",()=>{const s=this.graph.getNodes();if(s.length!==p.length)throw new Error("The size of nodes and OPs do not match.");this._ops=p.map((h,f)=>new c(h,s[f])),this.reset(),this._starter=[],this._ops.forEach((h,f)=>{let l=!0;for(const o of h.node.inputs)if(!this._values[o]&&this.graph.getInputIndices().indexOf(o)===-1){l=!1;break}l&&this._starter.push(f)})})}reset(){this._values=this.graph.getValues().map(p=>p.tensor)}async execute(p,s){return this.profiler.event("session","ExecutionPlan.execute",async()=>{this.reset();const h=p.createInferenceHandler(),f=this.graph.getInputIndices();if(s.length!==f.length)throw new Error(`number of input tensors don't match the number of inputs to the model: actual: ${s.length} expected: ${f.length}`);s.forEach((i,d)=>{const g=f[d];this._values[g]=i});const l=this._starter.slice(0),o=this.graph.getValues(),t=this.graph.getNodes();let e=0;for(;ethis._values[w]);if(g.indexOf(void 0)!==-1)throw new Error(`unresolved input detected: op: ${d.node}`);const m=g;u.Logger.verbose("ExecPlan",`Runing op:${d.node.name} (${m.map((w,v)=>`'${d.node.inputs[v]}': ${w.type}[${w.dims.join(",")}]`).join(", ")})`);const _=await this.profiler.event("node",d.node.name,async()=>d.op.impl(h,m,d.op.context));if(_.length!==d.node.outputs.length)throw new Error("the size of output does not match model definition.");_.forEach((w,v)=>{const S=d.node.outputs[v];if(this._values[S])throw new Error(`output [${S}] already has value: op:${d.node.name}`);this._values[S]=w});const y=new Set;_.forEach((w,v)=>{const S=d.node.outputs[v];for(const O of o[S].to){const A=t[O];let T=!0;for(const M of A.inputs)if(!this._values[M]){T=!1;break}T&&y.add(O)}}),l.push(...y)}const r=[];for(let i=0;i{Object.defineProperty(n,"__esModule",{value:!0}),n.Graph=void 0;const u=a(1446),c=a(7778),p=a(9395),s=a(9162),h=a(2517);var f=p.onnxruntime.experimental.fbs;n.Graph={from:(e,r)=>new t(e,r)};class l{constructor(r){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 0,r&&(this.type=h.ProtoUtil.tensorValueTypeFromProto(r.type.tensorType))}get from(){return this._from}get to(){return this._to}}class o{constructor(r,i){r instanceof u.onnx.NodeProto?(this.name=r.name,this.opType=r.opType,this.attributes=new c.Attribute(r.attribute)):r instanceof f.Node&&(this.name=i??r.name(),this.opType=r.opType(),this.attributes=new c.Attribute(h.ProtoUtil.tensorAttributesFromORTFormat(r))),this.inputs=[],this.outputs=[],this.executeNode=!0}}class t{constructor(r,i){if(!r)throw new TypeError("graph is empty");this.buildGraph(r),this.transformGraph(i),this.checkIsAcyclic()}getInputIndices(){return this._allInputIndices}getInputNames(){return this._allInputNames}getOutputIndices(){return this._allOutputIndices}getOutputNames(){return this._allOutputNames}getValues(){return this._allData}getNodes(){return this._nodes}buildGraph(r){if(r instanceof u.onnx.GraphProto)this.buildGraphFromOnnxFormat(r);else{if(!(r instanceof f.Graph))throw new TypeError("Graph type is not supported.");this.buildGraphFromOrtFormat(r)}}buildGraphFromOnnxFormat(r){const i=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];const d=new Map;if(!r.input)throw new Error("missing information in graph: input");const g=[];for(const m of r.input){if(i.has(m.name))throw new Error(`duplicated input name: ${m.name}`);const _=this._allData.push(new l(m))-1;i.set(m.name,_),g.push(m.name)}if(!r.initializer)throw new Error("missing information in graph: initializer");for(const m of r.initializer){let _=i.get(m.name);if(_===void 0){const y=new l;y.type={shape:{dims:h.ProtoUtil.tensorDimsFromProto(m.dims)},tensorType:h.ProtoUtil.tensorDataTypeFromProto(m.dataType)},_=this._allData.push(y)-1,i.set(m.name,_)}this._allData[_]._from=-1,this._allData[_].tensor=s.Tensor.fromProto(m)}for(let m=0;m{this._allData[g]._to.forEach(m=>{r.add(m)})});const i=Array.from(r),d=new Array(this._nodes.length).fill("white");for(;i.length>0;){const g=i.pop();d[g]==="gray"?d[g]="black":(i.push(g),d[g]="gray",this._nodes[g].outputs.forEach(m=>{const _=this._allData[m];if(_.tensor!==void 0)throw new Error("node outputs should not be initialized");if(_._from!==g)throw new Error("from property of the Value object doesn't match index of Node being processed");_._to.forEach(y=>{if(d[y]==="gray")throw new Error("model graph is cyclic");d[y]==="white"&&i.push(y)})}))}}transformGraph(r){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),this.fuseConvActivationNodes(),r&&r.transformGraph(this),this.finalizeGraph()}finalizeGraph(){let r=0;for(let i=0;i0&&(this._nodes[i].inputs.forEach(d=>{const g=this._allData[d]._to.indexOf(i+r);g!==-1&&(this._allData[d]._to[g]=i)}),this._nodes[i].outputs.forEach(d=>{this._allData[d]._from&&this._allData[d]._from===i+r&&(this._allData[d]._from=i)})):(r++,this._nodes[i].outputs.forEach(d=>{this._allData[d]._from=-2}),this._nodes.splice(i,1),i--);r=0;for(let i=0;i0){let d=-1;this._allData[i].from!==void 0&&this._allData[i].from!==-1?(d=this._nodes[this._allData[i].from].outputs.indexOf(i+r),d!==-1&&(this._nodes[this._allData[i].from].outputs[d]=i)):(d=this._allInputIndices.indexOf(i+r),d!==-1&&(this._allInputIndices[d]=i)),this._allData[i].to.forEach(g=>{d=this._nodes[g].inputs.indexOf(i+r),d!==-1&&(this._nodes[g].inputs[d]=i)}),this._allData[i].to.length===0&&(d=this._allOutputIndices.indexOf(i+r),d!==-1&&(this._allOutputIndices[d]=i))}}else r++,this._allData.splice(i,1),i--}deleteNode(r){const i=this._nodes[r];if(i.outputs.length>1){for(let w=1;w0)throw new Error("Node deletion with more than one output connected to other nodes is not supported. ")}i.executeNode=!1;const d=i.inputs[0],g=i.outputs[0],m=this._allData[g].to,_=this._allData[d].to.indexOf(r);if(_===-1)throw new Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[d].to.splice(_,1),this._allData[g]._to=[];const y=this._allOutputIndices.indexOf(g);if(y!==-1&&(this._allOutputIndices[y]=d),m&&m.length>0)for(const w of m){const v=this._nodes[w].inputs.indexOf(g);if(v===-1)throw new Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[w].inputs[v]=d,this._allData[d].to.push(w)}}removeAllDropoutNodes(){let r=0;for(const i of this._nodes){if(i.opType==="Dropout"){if(i.inputs.length!==1)throw new Error("Dropout nodes should only contain one input. ");if(i.outputs.length!==1&&i.outputs.length!==2)throw new Error("Dropout nodes should contain either 1 or 2 output(s)");if(i.outputs.length===2&&this._allData[i.outputs[1]]._to.length!==0)throw new Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(r)}r++}}removeAllIdentityNodes(){let r=0;for(const i of this._nodes)i.opType==="Identity"&&this.deleteNode(r),r++}isActivation(r){switch(r.opType){case"Relu":case"Sigmoid":case"Clip":return!0;default:return!1}}fuseConvActivationNodes(){for(const r of this._nodes)if(r.opType==="Conv"){const i=this._allData[r.outputs[0]]._to;if(i.length===1&&this.isActivation(this._nodes[i[0]])){const d=this._nodes[i[0]];if(d.opType==="Clip")if(d.inputs.length===1)try{r.attributes.set("activation_params","floats",[d.attributes.getFloat("min"),d.attributes.getFloat("max")])}catch{r.attributes.set("activation_params","floats",[h.MIN_CLIP,h.MAX_CLIP])}else{if(!(d.inputs.length>=3&&this._allData[d.inputs[1]].tensor!==void 0&&this._allData[d.inputs[2]].tensor!==void 0))continue;r.attributes.set("activation_params","floats",[this._allData[d.inputs[1]].tensor.floatData[0],this._allData[d.inputs[2]].tensor.floatData[0]])}r.attributes.set("activation","string",d.opType),this.deleteNode(i[0])}}}}},6231:(b,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.now=n.Profiler=n.Logger=void 0;const a={verbose:1e3,info:2e3,warning:4e3,error:5e3,fatal:6e3},u={none:new class{log(o,t,e){}},console:new class{log(o,t,e){console.log(`${this.color(o)} ${e?"\x1B[35m"+e+"\x1B[0m ":""}${t}`)}color(o){switch(o){case"verbose":return"\x1B[34;40mv\x1B[0m";case"info":return"\x1B[32mi\x1B[0m";case"warning":return"\x1B[30;43mw\x1B[0m";case"error":return"\x1B[31;40me\x1B[0m";case"fatal":return"\x1B[101mf\x1B[0m";default:throw new Error(`unsupported severity: ${o}`)}}}},c={provider:"console",minimalSeverity:"warning",logDateTime:!0,logSourceLocation:!1};let p={"":c};function s(o,t,e,r){if(t===void 0)return i=o,{verbose:s.verbose.bind(null,i),info:s.info.bind(null,i),warning:s.warning.bind(null,i),error:s.error.bind(null,i),fatal:s.fatal.bind(null,i)};if(e===void 0)h(o,t);else if(typeof e=="number"&&r===void 0)h(o,t);else if(typeof e=="string"&&r===void 0)h(o,e,0,t);else{if(typeof e!="string"||typeof r!="number")throw new TypeError("input is valid");h(o,e,0,t)}var i}function h(o,t,e,r){const i=p[r||""]||p[""];a[o]{g.then(async y=>{i&&await i.end(),m(y)},async y=>{i&&await i.end(),_(y)})});if(!d&&i){const m=i.end();if(m&&typeof m.then=="function")return new Promise((_,y)=>{m.then(()=>{_(g)},w=>{y(w)})})}return g}begin(o,t,e){if(!this._started)throw new Error("profiler is not started yet");if(e===void 0){const r=(0,n.now)();return this.flush(r),new f(o,t,r,i=>this.endSync(i))}{const r=e.beginTimer();return new f(o,t,0,async i=>this.end(i),r,e)}}async end(o){const t=await o.checkTimer();this._timingEvents.length=this._flushBatchSize||o-this._flushTime>=this._flushIntervalInMilliseconds){for(const t=this._flushPointer;this._flushPointerperformance.now():Date.now},2644:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Model=void 0;const u=a(5686),c=a(1446),p=a(7070),s=a(9395),h=a(2517);var f=s.onnxruntime.experimental.fbs;n.Model=class{constructor(){}load(l,o,t){if(!t)try{return void this.loadFromOnnxFormat(l,o)}catch(e){if(t!==void 0)throw e}this.loadFromOrtFormat(l,o)}loadFromOnnxFormat(l,o){const t=c.onnx.ModelProto.decode(l);if(h.LongUtil.longToNumber(t.irVersion)<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=t.opsetImport.map(e=>({domain:e.domain,version:h.LongUtil.longToNumber(e.version)})),this._graph=p.Graph.from(t.graph,o)}loadFromOrtFormat(l,o){const t=new u.flatbuffers.ByteBuffer(l),e=f.InferenceSession.getRootAsInferenceSession(t).model();if(h.LongUtil.longToNumber(e.irVersion())<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=[];for(let r=0;r{Object.defineProperty(n,"__esModule",{value:!0}),n.FLOAT_TYPES=n.INT_TYPES=n.NUMBER_TYPES=void 0,n.NUMBER_TYPES=["float32","float64","int32","int16","int8","uint16","uint32","uint8"],n.INT_TYPES=["int32","int16","int8","uint16","uint32","uint8"],n.FLOAT_TYPES=["float32","float64"]},1047:(b,n)=>{function a(u,c){if(c.endsWith("+")){const p=Number.parseInt(c.substring(0,c.length-1),10);return!isNaN(p)&&p<=u}if(c.split("-").length===2){const p=c.split("-"),s=Number.parseInt(p[0],10),h=Number.parseInt(p[1],10);return!isNaN(s)&&!isNaN(h)&&s<=u&&u<=h}return Number.parseInt(c,10)===u}Object.defineProperty(n,"__esModule",{value:!0}),n.resolveOperator=void 0,n.resolveOperator=function(u,c,p){for(const s of p){const h=s[0],f=s[1],l=s[2],o=s[3],t=s[4];if(u.opType===h){for(const e of c)if((e.domain===f||e.domain==="ai.onnx"&&f==="")&&a(e.version,l))return{opImpl:o,opInit:t}}}throw new TypeError(`cannot resolve operator '${u.opType}' with opsets: ${c.map(s=>`${s.domain||"ai.onnx"} v${s.version}`).join(", ")}`)}},9395:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.onnxruntime=void 0;const u=a(5686);var c,p;c=n.onnxruntime||(n.onnxruntime={}),function(s){(function(h){h[h.UNDEFINED=0]="UNDEFINED",h[h.FLOAT=1]="FLOAT",h[h.INT=2]="INT",h[h.STRING=3]="STRING",h[h.TENSOR=4]="TENSOR",h[h.GRAPH=5]="GRAPH",h[h.FLOATS=6]="FLOATS",h[h.INTS=7]="INTS",h[h.STRINGS=8]="STRINGS",h[h.TENSORS=9]="TENSORS",h[h.GRAPHS=10]="GRAPHS",h[h.SPARSE_TENSOR=11]="SPARSE_TENSOR",h[h.SPARSE_TENSORS=12]="SPARSE_TENSORS"})(s.AttributeType||(s.AttributeType={}))}((p=c.experimental||(c.experimental={})).fbs||(p.fbs={})),function(s){(function(h){(function(f){(function(l){l[l.UNKNOWN=0]="UNKNOWN",l[l.VALUE=1]="VALUE",l[l.PARAM=2]="PARAM"})(f.DimensionValueType||(f.DimensionValueType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.UNDEFINED=0]="UNDEFINED",l[l.FLOAT=1]="FLOAT",l[l.UINT8=2]="UINT8",l[l.INT8=3]="INT8",l[l.UINT16=4]="UINT16",l[l.INT16=5]="INT16",l[l.INT32=6]="INT32",l[l.INT64=7]="INT64",l[l.STRING=8]="STRING",l[l.BOOL=9]="BOOL",l[l.FLOAT16=10]="FLOAT16",l[l.DOUBLE=11]="DOUBLE",l[l.UINT32=12]="UINT32",l[l.UINT64=13]="UINT64",l[l.COMPLEX64=14]="COMPLEX64",l[l.COMPLEX128=15]="COMPLEX128",l[l.BFLOAT16=16]="BFLOAT16"})(f.TensorDataType||(f.TensorDataType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.Primitive=0]="Primitive",l[l.Fused=1]="Fused"})(f.NodeType||(f.NodeType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.NONE=0]="NONE",l[l.tensor_type=1]="tensor_type",l[l.sequence_type=2]="sequence_type",l[l.map_type=3]="map_type"})(f.TypeInfoValue||(f.TypeInfoValue={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsShape(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsShape(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}dim(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Dimension).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}dimLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}static startShape(t){t.startObject(1)}static addDim(t,e){t.addFieldOffset(0,e,0)}static createDimVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startDimVector(t,e){t.startVector(4,e,4)}static endShape(t){return t.endObject()}static createShape(t,e){return l.startShape(t),l.addDim(t,e),l.endShape(t)}}f.Shape=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimension(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimension(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}value(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.DimensionValue).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}denotation(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimension(t){t.startObject(2)}static addValue(t,e){t.addFieldOffset(0,e,0)}static addDenotation(t,e){t.addFieldOffset(1,e,0)}static endDimension(t){return t.endObject()}static createDimension(t,e,r){return l.startDimension(t),l.addValue(t,e),l.addDenotation(t,r),l.endDimension(t)}}f.Dimension=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimensionValue(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimensionValue(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}dimType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt8(this.bb_pos+t):s.experimental.fbs.DimensionValueType.UNKNOWN}dimValue(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}dimParam(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimensionValue(t){t.startObject(3)}static addDimType(t,e){t.addFieldInt8(0,e,s.experimental.fbs.DimensionValueType.UNKNOWN)}static addDimValue(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static addDimParam(t,e){t.addFieldOffset(2,e,0)}static endDimensionValue(t){return t.endObject()}static createDimensionValue(t,e,r,i){return l.startDimensionValue(t),l.addDimType(t,e),l.addDimValue(t,r),l.addDimParam(t,i),l.endDimensionValue(t)}}f.DimensionValue=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensorTypeAndShape(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensorTypeAndShape(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}elemType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}shape(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Shape).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startTensorTypeAndShape(t){t.startObject(2)}static addElemType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addShape(t,e){t.addFieldOffset(1,e,0)}static endTensorTypeAndShape(t){return t.endObject()}static createTensorTypeAndShape(t,e,r){return l.startTensorTypeAndShape(t),l.addElemType(t,e),l.addShape(t,r),l.endTensorTypeAndShape(t)}}f.TensorTypeAndShape=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsMapType(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsMapType(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}keyType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}valueType(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startMapType(t){t.startObject(2)}static addKeyType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addValueType(t,e){t.addFieldOffset(1,e,0)}static endMapType(t){return t.endObject()}static createMapType(t,e,r){return l.startMapType(t),l.addKeyType(t,e),l.addValueType(t,r),l.endMapType(t)}}f.MapType=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSequenceType(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSequenceType(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}elemType(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSequenceType(t){t.startObject(1)}static addElemType(t,e){t.addFieldOffset(0,e,0)}static endSequenceType(t){return t.endObject()}static createSequenceType(t,e){return l.startSequenceType(t),l.addElemType(t,e),l.endSequenceType(t)}}f.SequenceType=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(h.fbs||(h.fbs={})).EdgeEnd=class{constructor(){this.bb=null,this.bb_pos=0}__init(f,l){return this.bb_pos=f,this.bb=l,this}nodeIndex(){return this.bb.readUint32(this.bb_pos)}srcArgIndex(){return this.bb.readInt32(this.bb_pos+4)}dstArgIndex(){return this.bb.readInt32(this.bb_pos+8)}static createEdgeEnd(f,l,o,t){return f.prep(4,12),f.writeInt32(t),f.writeInt32(o),f.writeInt32(l),f.offset()}}})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNodeEdge(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNodeEdge(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}nodeIndex(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readUint32(this.bb_pos+t):0}inputEdges(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}inputEdgesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}outputEdges(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}outputEdgesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNodeEdge(t){t.startObject(3)}static addNodeIndex(t,e){t.addFieldInt32(0,e,0)}static addInputEdges(t,e){t.addFieldOffset(1,e,0)}static startInputEdgesVector(t,e){t.startVector(12,e,4)}static addOutputEdges(t,e){t.addFieldOffset(2,e,0)}static startOutputEdgesVector(t,e){t.startVector(12,e,4)}static endNodeEdge(t){return t.endObject()}static createNodeEdge(t,e,r,i){return l.startNodeEdge(t),l.addNodeIndex(t,e),l.addInputEdges(t,r),l.addOutputEdges(t,i),l.endNodeEdge(t)}}f.NodeEdge=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNode(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNode(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}sinceVersion(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):0}index(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readUint32(this.bb_pos+t):0}opType(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.NodeType.Primitive}executionProviderType(t){let e=this.bb.__offset(this.bb_pos,18);return e?this.bb.__string(this.bb_pos+e,t):null}inputs(t,e){let r=this.bb.__offset(this.bb_pos,20);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,22);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}attributes(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?(e||new s.experimental.fbs.Attribute).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}attributesLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCounts(t){let e=this.bb.__offset(this.bb_pos,26);return e?this.bb.readInt32(this.bb.__vector(this.bb_pos+e)+4*t):0}inputArgCountsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCountsArray(){let t=this.bb.__offset(this.bb_pos,26);return t?new Int32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}implicitInputs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}implicitInputsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNode(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDomain(t,e){t.addFieldOffset(2,e,0)}static addSinceVersion(t,e){t.addFieldInt32(3,e,0)}static addIndex(t,e){t.addFieldInt32(4,e,0)}static addOpType(t,e){t.addFieldOffset(5,e,0)}static addType(t,e){t.addFieldInt32(6,e,s.experimental.fbs.NodeType.Primitive)}static addExecutionProviderType(t,e){t.addFieldOffset(7,e,0)}static addInputs(t,e){t.addFieldOffset(8,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(9,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addAttributes(t,e){t.addFieldOffset(10,e,0)}static createAttributesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startAttributesVector(t,e){t.startVector(4,e,4)}static addInputArgCounts(t,e){t.addFieldOffset(11,e,0)}static createInputArgCountsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startInputArgCountsVector(t,e){t.startVector(4,e,4)}static addImplicitInputs(t,e){t.addFieldOffset(12,e,0)}static createImplicitInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startImplicitInputsVector(t,e){t.startVector(4,e,4)}static endNode(t){return t.endObject()}static createNode(t,e,r,i,d,g,m,_,y,w,v,S,O,A){return l.startNode(t),l.addName(t,e),l.addDocString(t,r),l.addDomain(t,i),l.addSinceVersion(t,d),l.addIndex(t,g),l.addOpType(t,m),l.addType(t,_),l.addExecutionProviderType(t,y),l.addInputs(t,w),l.addOutputs(t,v),l.addAttributes(t,S),l.addInputArgCounts(t,O),l.addImplicitInputs(t,A),l.endNode(t)}}f.Node=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsValueInfo(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsValueInfo(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startValueInfo(t){t.startObject(3)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldOffset(2,e,0)}static endValueInfo(t){return t.endObject()}static createValueInfo(t,e,r,i){return l.startValueInfo(t),l.addName(t,e),l.addDocString(t,r),l.addType(t,i),l.endValueInfo(t)}}f.ValueInfo=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTypeInfo(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTypeInfo(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}denotation(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}valueType(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readUint8(this.bb_pos+t):s.experimental.fbs.TypeInfoValue.NONE}value(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}static startTypeInfo(t){t.startObject(3)}static addDenotation(t,e){t.addFieldOffset(0,e,0)}static addValueType(t,e){t.addFieldInt8(1,e,s.experimental.fbs.TypeInfoValue.NONE)}static addValue(t,e){t.addFieldOffset(2,e,0)}static endTypeInfo(t){return t.endObject()}static createTypeInfo(t,e,r,i){return l.startTypeInfo(t),l.addDenotation(t,e),l.addValueType(t,r),l.addValue(t,i),l.endTypeInfo(t)}}f.TypeInfo=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsOperatorSetId(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsOperatorSetId(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}domain(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}version(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}static startOperatorSetId(t){t.startObject(2)}static addDomain(t,e){t.addFieldOffset(0,e,0)}static addVersion(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static endOperatorSetId(t){return t.endObject()}static createOperatorSetId(t,e,r){return l.startOperatorSetId(t),l.addDomain(t,e),l.addVersion(t,r),l.endOperatorSetId(t)}}f.OperatorSetId=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensor(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensor(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}dataType(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}rawData(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.readUint8(this.bb.__vector(this.bb_pos+e)+t):0}rawDataLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}rawDataArray(){let t=this.bb.__offset(this.bb_pos,12);return t?new Uint8Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}stringData(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringDataLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}static startTensor(t){t.startObject(6)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static addDataType(t,e){t.addFieldInt32(3,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addRawData(t,e){t.addFieldOffset(4,e,0)}static createRawDataVector(t,e){t.startVector(1,e.length,1);for(let r=e.length-1;r>=0;r--)t.addInt8(e[r]);return t.endVector()}static startRawDataVector(t,e){t.startVector(1,e,1)}static addStringData(t,e){t.addFieldOffset(5,e,0)}static createStringDataVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringDataVector(t,e){t.startVector(4,e,4)}static endTensor(t){return t.endObject()}static createTensor(t,e,r,i,d,g,m){return l.startTensor(t),l.addName(t,e),l.addDocString(t,r),l.addDims(t,i),l.addDataType(t,d),l.addRawData(t,g),l.addStringData(t,m),l.endTensor(t)}}f.Tensor=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSparseTensor(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSparseTensor(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}values(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}indices(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSparseTensor(t){t.startObject(3)}static addValues(t,e){t.addFieldOffset(0,e,0)}static addIndices(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static endSparseTensor(t){return t.endObject()}static createSparseTensor(t,e,r,i){return l.startSparseTensor(t),l.addValues(t,e),l.addIndices(t,r),l.addDims(t,i),l.endSparseTensor(t)}}f.SparseTensor=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsAttribute(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsAttribute(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.AttributeType.UNDEFINED}f(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readFloat32(this.bb_pos+t):0}i(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}s(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}t(t){let e=this.bb.__offset(this.bb_pos,16);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}g(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}floats(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.readFloat32(this.bb.__vector(this.bb_pos+e)+4*t):0}floatsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}floatsArray(){let t=this.bb.__offset(this.bb_pos,20);return t?new Float32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}ints(t){let e=this.bb.__offset(this.bb_pos,22);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}intsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}strings(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringsLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}tensors(t,e){let r=this.bb.__offset(this.bb_pos,26);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}tensorsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}graphs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?(e||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}graphsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startAttribute(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldInt32(2,e,s.experimental.fbs.AttributeType.UNDEFINED)}static addF(t,e){t.addFieldFloat32(3,e,0)}static addI(t,e){t.addFieldInt64(4,e,t.createLong(0,0))}static addS(t,e){t.addFieldOffset(5,e,0)}static addT(t,e){t.addFieldOffset(6,e,0)}static addG(t,e){t.addFieldOffset(7,e,0)}static addFloats(t,e){t.addFieldOffset(8,e,0)}static createFloatsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addFloat32(e[r]);return t.endVector()}static startFloatsVector(t,e){t.startVector(4,e,4)}static addInts(t,e){t.addFieldOffset(9,e,0)}static createIntsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startIntsVector(t,e){t.startVector(8,e,8)}static addStrings(t,e){t.addFieldOffset(10,e,0)}static createStringsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringsVector(t,e){t.startVector(4,e,4)}static addTensors(t,e){t.addFieldOffset(11,e,0)}static createTensorsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startTensorsVector(t,e){t.startVector(4,e,4)}static addGraphs(t,e){t.addFieldOffset(12,e,0)}static createGraphsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startGraphsVector(t,e){t.startVector(4,e,4)}static endAttribute(t){return t.endObject()}static createAttribute(t,e,r,i,d,g,m,_,y,w,v,S,O,A){return l.startAttribute(t),l.addName(t,e),l.addDocString(t,r),l.addType(t,i),l.addF(t,d),l.addI(t,g),l.addS(t,m),l.addT(t,_),l.addG(t,y),l.addFloats(t,w),l.addInts(t,v),l.addStrings(t,S),l.addTensors(t,O),l.addGraphs(t,A),l.endAttribute(t)}}f.Attribute=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsGraph(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsGraph(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}initializers(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}initializersLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeArgs(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.ValueInfo).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeArgsLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}nodes(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.Node).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}maxNodeIndex(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readUint32(this.bb_pos+t):0}nodeEdges(t,e){let r=this.bb.__offset(this.bb_pos,12);return r?(e||new s.experimental.fbs.NodeEdge).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeEdgesLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}inputs(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,16);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}sparseInitializers(t,e){let r=this.bb.__offset(this.bb_pos,18);return r?(e||new s.experimental.fbs.SparseTensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}sparseInitializersLength(){let t=this.bb.__offset(this.bb_pos,18);return t?this.bb.__vector_len(this.bb_pos+t):0}static startGraph(t){t.startObject(8)}static addInitializers(t,e){t.addFieldOffset(0,e,0)}static createInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInitializersVector(t,e){t.startVector(4,e,4)}static addNodeArgs(t,e){t.addFieldOffset(1,e,0)}static createNodeArgsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeArgsVector(t,e){t.startVector(4,e,4)}static addNodes(t,e){t.addFieldOffset(2,e,0)}static createNodesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodesVector(t,e){t.startVector(4,e,4)}static addMaxNodeIndex(t,e){t.addFieldInt32(3,e,0)}static addNodeEdges(t,e){t.addFieldOffset(4,e,0)}static createNodeEdgesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeEdgesVector(t,e){t.startVector(4,e,4)}static addInputs(t,e){t.addFieldOffset(5,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(6,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addSparseInitializers(t,e){t.addFieldOffset(7,e,0)}static createSparseInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSparseInitializersVector(t,e){t.startVector(4,e,4)}static endGraph(t){return t.endObject()}static createGraph(t,e,r,i,d,g,m,_,y){return l.startGraph(t),l.addInitializers(t,e),l.addNodeArgs(t,r),l.addNodes(t,i),l.addMaxNodeIndex(t,d),l.addNodeEdges(t,g),l.addInputs(t,m),l.addOutputs(t,_),l.addSparseInitializers(t,y),l.endGraph(t)}}f.Graph=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsModel(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsModel(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}irVersion(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}opsetImport(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.OperatorSetId).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}opsetImportLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}producerName(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}producerVersion(t){let e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.__string(this.bb_pos+e,t):null}modelVersion(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}docString(t){let e=this.bb.__offset(this.bb_pos,16);return e?this.bb.__string(this.bb_pos+e,t):null}graph(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}graphDocString(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.__string(this.bb_pos+e,t):null}static startModel(t){t.startObject(9)}static addIrVersion(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}static addOpsetImport(t,e){t.addFieldOffset(1,e,0)}static createOpsetImportVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOpsetImportVector(t,e){t.startVector(4,e,4)}static addProducerName(t,e){t.addFieldOffset(2,e,0)}static addProducerVersion(t,e){t.addFieldOffset(3,e,0)}static addDomain(t,e){t.addFieldOffset(4,e,0)}static addModelVersion(t,e){t.addFieldInt64(5,e,t.createLong(0,0))}static addDocString(t,e){t.addFieldOffset(6,e,0)}static addGraph(t,e){t.addFieldOffset(7,e,0)}static addGraphDocString(t,e){t.addFieldOffset(8,e,0)}static endModel(t){return t.endObject()}static createModel(t,e,r,i,d,g,m,_,y,w){return l.startModel(t),l.addIrVersion(t,e),l.addOpsetImport(t,r),l.addProducerName(t,i),l.addProducerVersion(t,d),l.addDomain(t,g),l.addModelVersion(t,m),l.addDocString(t,_),l.addGraph(t,y),l.addGraphDocString(t,w),l.endModel(t)}}f.Model=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsKernelCreateInfos(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsKernelCreateInfos(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}nodeIndices(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readUint32(this.bb.__vector(this.bb_pos+e)+4*t):0}nodeIndicesLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeIndicesArray(){let t=this.bb.__offset(this.bb_pos,4);return t?new Uint32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}kernelDefHashes(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}kernelDefHashesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startKernelCreateInfos(t){t.startObject(2)}static addNodeIndices(t,e){t.addFieldOffset(0,e,0)}static createNodeIndicesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startNodeIndicesVector(t,e){t.startVector(4,e,4)}static addKernelDefHashes(t,e){t.addFieldOffset(1,e,0)}static createKernelDefHashesVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startKernelDefHashesVector(t,e){t.startVector(8,e,8)}static endKernelCreateInfos(t){return t.endObject()}static createKernelCreateInfos(t,e,r){return l.startKernelCreateInfos(t),l.addNodeIndices(t,e),l.addKernelDefHashes(t,r),l.endKernelCreateInfos(t)}}f.KernelCreateInfos=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSubGraphSessionState(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSubGraphSessionState(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}graphId(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSubGraphSessionState(t){t.startObject(2)}static addGraphId(t,e){t.addFieldOffset(0,e,0)}static addSessionState(t,e){t.addFieldOffset(1,e,0)}static endSubGraphSessionState(t){let e=t.endObject();return t.requiredField(e,4),e}static createSubGraphSessionState(t,e,r){return l.startSubGraphSessionState(t),l.addGraphId(t,e),l.addSessionState(t,r),l.endSubGraphSessionState(t)}}f.SubGraphSessionState=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSessionState(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSessionState(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}kernels(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.KernelCreateInfos).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}subGraphSessionStates(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.SubGraphSessionState).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}subGraphSessionStatesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSessionState(t){t.startObject(2)}static addKernels(t,e){t.addFieldOffset(0,e,0)}static addSubGraphSessionStates(t,e){t.addFieldOffset(1,e,0)}static createSubGraphSessionStatesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSubGraphSessionStatesVector(t,e){t.startVector(4,e,4)}static endSessionState(t){return t.endObject()}static createSessionState(t,e,r){return l.startSessionState(t),l.addKernels(t,e),l.addSubGraphSessionStates(t,r),l.endSessionState(t)}}f.SessionState=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsInferenceSession(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsInferenceSession(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static bufferHasIdentifier(t){return t.__has_identifier("ORTM")}ortVersion(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}model(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Model).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startInferenceSession(t){t.startObject(3)}static addOrtVersion(t,e){t.addFieldOffset(0,e,0)}static addModel(t,e){t.addFieldOffset(1,e,0)}static addSessionState(t,e){t.addFieldOffset(2,e,0)}static endInferenceSession(t){return t.endObject()}static finishInferenceSessionBuffer(t,e){t.finish(e,"ORTM")}static finishSizePrefixedInferenceSessionBuffer(t,e){t.finish(e,"ORTM",!0)}static createInferenceSession(t,e,r,i){return l.startInferenceSession(t),l.addOrtVersion(t,e),l.addModel(t,r),l.addSessionState(t,i),l.endInferenceSession(t)}}f.InferenceSession=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={}))},7448:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxjsSessionHandler=void 0;const u=a(1670),c=a(9162);n.OnnxjsSessionHandler=class{constructor(p){this.session=p,this.inputNames=this.session.inputNames,this.outputNames=this.session.outputNames}async dispose(){}async run(p,s,h){const f=new Map;for(const t in p)if(Object.hasOwnProperty.call(p,t)){const e=p[t];f.set(t,new c.Tensor(e.dims,e.type,void 0,void 0,e.data))}const l=await this.session.run(f),o={};return l.forEach((t,e)=>{o[e]=new u.Tensor(t.type,t.data,t.dims)}),o}startProfiling(){this.session.startProfiling()}endProfiling(){this.session.endProfiling()}}},6919:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Session=void 0;const u=a(7067),c=a(1296),p=a(7091),s=a(1036),h=a(6231),f=a(2644);n.Session=class{constructor(l={}){this._initialized=!1,this.backendHint=l.backendHint,this.profiler=h.Profiler.create(l.profiler),this.context={profiler:this.profiler,graphInputTypes:[],graphInputDims:[]}}get inputNames(){return this._model.graph.getInputNames()}get outputNames(){return this._model.graph.getOutputNames()}startProfiling(){this.profiler.start()}endProfiling(){this.profiler.stop()}async loadModel(l,o,t){await this.profiler.event("session","Session.loadModel",async()=>{const e=await(0,p.resolveBackend)(this.backendHint);if(this.sessionHandler=e.createSessionHandler(this.context),this._model=new f.Model,typeof l=="string"){const r=l.endsWith(".ort");if(typeof fetch>"u"){const i=await(0,c.promisify)(u.readFile)(l);this.initialize(i,r)}else{const i=await fetch(l),d=await i.arrayBuffer();this.initialize(new Uint8Array(d),r)}}else if(ArrayBuffer.isView(l))this.initialize(l);else{const r=new Uint8Array(l,o||0,t||l.byteLength);this.initialize(r)}})}initialize(l,o){if(this._initialized)throw new Error("already initialized");this.profiler.event("session","Session.initialize",()=>{const t=this.sessionHandler.transformGraph?this.sessionHandler:void 0;this._model.load(l,t,o),this.sessionHandler.onGraphInitialized&&this.sessionHandler.onGraphInitialized(this._model.graph),this.initializeOps(this._model.graph),this._executionPlan=new s.ExecutionPlan(this._model.graph,this._ops,this.profiler)}),this._initialized=!0}async run(l){if(!this._initialized)throw new Error("session not initialized yet");return this.profiler.event("session","Session.run",async()=>{const o=this.normalizeAndValidateInputs(l),t=await this._executionPlan.execute(this.sessionHandler,o);return this.createOutput(t)})}normalizeAndValidateInputs(l){const o=this._model.graph.getInputNames();if(Array.isArray(l)){if(l.length!==o.length)throw new Error(`incorrect input array length: expected ${o.length} but got ${l.length}`)}else{if(l.size!==o.length)throw new Error(`incorrect input map size: expected ${o.length} but got ${l.size}`);const t=new Array(l.size);let e=0;for(let r=0;rtypeof A=="string")))throw new TypeError("cache should be a string array");O&&(this.cache=new Array(S))}else{if(w!==void 0){const A=e(m);if(!(w instanceof A))throw new TypeError(`cache should be type ${A.name}`)}if(O){const A=new ArrayBuffer(S*function(T){switch(T){case"bool":case"int8":case"uint8":return 1;case"int16":case"uint16":return 2;case"int32":case"uint32":case"float32":return 4;case"float64":return 8;default:throw new Error(`cannot calculate sizeof() on type ${T}`)}}(m));this.cache=function(T,M){return new(e(M))(T)}(A,m)}}}static fromProto(g){if(!g)throw new Error("cannot construct Value from an empty tensor");const m=f.ProtoUtil.tensorDataTypeFromProto(g.dataType),_=f.ProtoUtil.tensorDimsFromProto(g.dims),y=new o(_,m);if(m==="string")g.stringData.forEach((w,v)=>{y.data[v]=(0,f.decodeUtf8String)(w)});else if(g.rawData&&typeof g.rawData.byteLength=="number"&&g.rawData.byteLength>0){const w=y.data,v=new DataView(g.rawData.buffer,g.rawData.byteOffset,g.rawData.byteLength),S=t(g.dataType),O=g.rawData.byteLength/S;if(g.rawData.byteLength%S!=0)throw new Error("invalid buffer length");if(w.length!==O)throw new Error("buffer length mismatch");for(let A=0;A0){const w=y.data,v=new DataView(g.rawDataArray().buffer,g.rawDataArray().byteOffset,g.rawDataLength()),S=t(g.dataType()),O=g.rawDataLength()/S;if(g.rawDataLength()%S!=0)throw new Error("invalid buffer length");if(w.length!==O)throw new Error("buffer length mismatch");for(let A=0;A1&&M>1)return;O[S-A]=Math.max(T,M)}return O}static index(m,_){const y=new Array(_.length);return l.fillIndex(m,_,y),y}static fillIndex(m,_,y){const w=m.length-_.length;for(let v=0;v<_.length;v++)y[v]=m[w+v]%_[v]}static calc(m,_,y,w,v){const S=l.calcShape(m.dims,_.dims);if(S){if(w&&!e.areEqual(S,m.dims))return;const O=e.size(S),A=w?m:new h.Tensor(S,v||m.type);if(S.length===0)A.set([],y(m.get([]),_.get([])));else{const T=new Array(S.length),M=new Array(m.dims.length),N=new Array(_.dims.length);let B,$=0,L=0,H=!1,C=!1;m.dims.length===0&&($=m.get([]),H=!0),_.dims.length===0&&(L=_.get([]),C=!0);for(let z=0;z=0;J--)T[J]=B%S[J],B=Math.floor(B/S[J]);H||(l.fillIndex(T,m.dims,M),$=m.get(M)),C||(l.fillIndex(T,_.dims,N),L=_.get(N)),A.set(T,y($,L))}}return A}}static isValidBroadcast(m,_){const y=m.length,w=_.length;if(y>w)return!1;for(let v=1;v<=y;v++)if(m[y-v]!==1&&m[y-v]!==_[w-v])return!1;return!0}static getBroadcastDims(m,_){const y=m.length,w=[];for(let v=0;v1&&O===1&&w.unshift(S)}return w}}n.BroadcastUtil=l,n.arrayCopyHelper=function(g,m,_,y,w){if(y<0||y>=m.length)throw new Error("sourceIndex out of bounds");if(_<0||_>=g.length)throw new Error("targetIndex out of bounds");if(y+w>m.length)throw new Error("source indices to be copied are outside bounds");if(_+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;vp.default.isLong(_)?_.toNumber():_)}static tensorValueTypeFromProto(m){return{tensorType:o.tensorDataTypeFromProto(m.elemType),shape:{dims:o.tensorDimsFromProto(m.shape.dim.map(_=>_.dimValue))}}}static tensorDimsFromORTFormat(m){const _=[];for(let y=0;ym.length)throw new Error(`invalid dimension of ${_} for sizeFromDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,_,m.length)}static sizeToDimension(m,_){if(_<0||_>m.length)throw new Error(`invalid dimension of ${_} for sizeToDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,0,_)}static getSizeFromDimensionRange(m,_,y){let w=1;for(let v=_;v=0;--w)y[w]=y[w+1]*m[w+1];return y}static transpose(m){return m.slice().reverse()}static indicesToOffset(m,_,y){y===void 0&&(y=m.length);let w=0;for(let v=0;v=_)throw new Error("unsupported axis for this operation.");return m<0?m+_:m}static normalizeAxes(m,_){return m.map(y=>this.normalizeAxis(y,_))}static incrementIndex(m,_,y){if(_.length===0||m.length===0)throw new Error("Index incrementing unsupported for scalar Tensor");if(y===void 0)y=_.length;else if(y<=0||y>_.length)throw new Error("Incorrect axis to increment on");for(let w=y-1;w>=0&&(m[w]++,!(m[w]<_[w]));--w)m[w]=0}static calculateReshapedDims(m,_){if(_.length===0){if(m.length===0||e.size(m)===1)return[];throw new Error("cannot reshape to a scalar Tensor")}const y=_.length,w=new Array(y);let v=-1,S=1;for(let A=0;A=m.length)throw new Error("the dimension with value zero exceeds the dimension size of the input tensor");w[A]=m[A]}else w[A]=_[A];S*=w[A]}}const O=e.size(m);if(v!==-1){if(O%S!=0)throw new Error(`the input tensor cannot be reshaped to the requested shape. Input shape: [${m}] Output shape: [${_}]`);w[v]=O/S}else if(S!==O)throw new Error("reshapedDims and originalDims don't have matching sizes");return w}static sortBasedOnPerm(m,_){return _?_.map(y=>m[y]):m.slice().reverse()}static padShape(m,_){const y=m.length;return m.map((w,v)=>w+_[v]+_[v+y])}static areEqual(m,_){return m.length===_.length&&m.every((y,w)=>y===_[w])}static validateDimsAndCalcSize(m){if(m.length>6)throw new TypeError("Only rank 0 to 6 is supported for tensor shape.");let _=1;for(const y of m){if(!Number.isInteger(y))throw new TypeError(`Invalid shape: ${y} is not an integer`);if(y<0||y>2147483647)throw new TypeError(`Invalid shape: length ${y} is not allowed`);_*=y}return _}static flattenShape(m,_){_<0&&(_+=m.length);const y=m.reduce((v,S)=>v*S,1),w=m.slice(_).reduce((v,S)=>v*S,1);return[y/w,w]}static squeezeShape(m,_){const y=new Array;_=e.normalizeAxes(_,m.length);for(let w=0;w=0;if(v&&m[w]!==1)throw new Error("squeeze an axis of size different than 1");(_.length===0&&m[w]>1||_.length>0&&!v)&&y.push(m[w])}return y}static unsqueezeShape(m,_){const y=new Array(m.length+_.length);y.fill(0);for(let v=0;v<_.length;v++){const S=e.normalizeAxis(_[v],y.length);if(S>=y.length)throw new Error("'axes' has an out of range axis");if(y[S]!==0)throw new Error("'axes' has a duplicate axis");y[S]=1}let w=0;for(let v=0;v=m.length)throw new Error("sourceIndex out of bounds");if(_<0||_>=g.length)throw new Error("targetIndex out of bounds");if(y+w>m.length)throw new Error("source indices to be copied are outside bounds");if(_+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;v=m.length)throw new Error("sourceIndex out of bounds");if(_<0||_>=g.length)throw new Error("targetIndex out of bounds");if(y+w>m.length)throw new Error("source indices to be copied are outside bounds");if(_+w>g.length)throw new Error("target array is too small to hold result");for(let S=0;S=m.length)throw new Error("sourceIndex out of bounds");if(_<0||_>=g.length)throw new Error("targetIndex out of bounds");if(y+w>m.length)throw new Error("source indices to be copied are outside bounds");if(_+w>g.length)throw new Error("target array is too small to hold result");for(let S=0;S=m.length)throw new Error("sourceIndex out of bounds");if(_<0||_>=g.length)throw new Error("targetIndex out of bounds");if(y+w>m.length)throw new Error("source indices to be copied are outside bounds");if(_+w>g.length)throw new Error("target array is too small to hold result");for(let v=0;v_.push(L));const O=i.calcReduceShape(S,_,!0),A=e.size(O),T=new h.Tensor(O,m.type),M=e.computeStrides(O),N=e.computeStrides(S),B=new Array(S.length);for(let $=0;$=_.length)return S(m[v]);const T=_[w],M=T>=y.length?1:e.size(y.slice(T+1));for(let N=0;Nv!==0)}}n.ReduceUtil=i;class d{static adjustPoolAttributes(m,_,y,w,v,S){if(!m&&y.length!==_.length-2)throw new Error("length of specified kernel shapes should be 2 less than length of input dimensions");if(m)for(let O=0;O<_.length-2;O++)O>=y.length?y.push(_[O+2]):y[O]=_[O+2];for(let O=0;O=y[O]||S[O+y.length]>=y[O])throw new Error("pads should be smaller than kernel")}}static adjustPadsBasedOnAutoPad(m,_,y,w,v,S){if(S){if(v.length!==2*(m.length-2))throw new Error("length of pads should be twice the length of data dimensions");if(_.length!==m.length-2)throw new Error("length of strides should be the length of data dimensions");if(w.length!==m.length-2)throw new Error("length of kernel shapes should be the length of data dimensions");for(let O=0;O{Object.defineProperty(n,"__esModule",{value:!0}),n.iterateExtraOptions=void 0,n.iterateExtraOptions=(a,u,c,p)=>{if(typeof a=="object"&&a!==null){if(c.has(a))throw new Error("Circular reference in options");c.add(a)}Object.entries(a).forEach(([s,h])=>{const f=u?u+s:s;if(typeof h=="object")(0,n.iterateExtraOptions)(h,f+".",c,p);else if(typeof h=="string"||typeof h=="number")p(f,h.toString());else{if(typeof h!="boolean")throw new Error("Can't handle extra config type: "+typeof h);p(f,h?"1":"0")}})}},2157:function(b,n,a){var u,c=this&&this.__createBinding||(Object.create?function(M,N,B,$){$===void 0&&($=B);var L=Object.getOwnPropertyDescriptor(N,B);L&&!("get"in L?!N.__esModule:L.writable||L.configurable)||(L={enumerable:!0,get:function(){return N[B]}}),Object.defineProperty(M,$,L)}:function(M,N,B,$){$===void 0&&($=B),M[$]=N[B]}),p=this&&this.__setModuleDefault||(Object.create?function(M,N){Object.defineProperty(M,"default",{enumerable:!0,value:N})}:function(M,N){M.default=N}),s=this&&this.__importStar||function(M){if(M&&M.__esModule)return M;var N={};if(M!=null)for(var B in M)B!=="default"&&Object.prototype.hasOwnProperty.call(M,B)&&c(N,M,B);return p(N,M),N};Object.defineProperty(n,"__esModule",{value:!0}),n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=n.initWasm=void 0;const h=a(1670),f=s(a(349)),l=a(6361),o=()=>!!h.env.wasm.proxy&&typeof document<"u";let t,e,r,i=!1,d=!1,g=!1;const m=[],_=[],y=[],w=[],v=[],S=[],O=()=>{if(i||!d||g||!t)throw new Error("worker not ready")},A=M=>{switch(M.data.type){case"init-wasm":i=!1,M.data.err?(g=!0,e[1](M.data.err)):(d=!0,e[0]());break;case"init-ort":M.data.err?r[1](M.data.err):r[0]();break;case"create_allocate":M.data.err?m.shift()[1](M.data.err):m.shift()[0](M.data.out);break;case"create_finalize":M.data.err?_.shift()[1](M.data.err):_.shift()[0](M.data.out);break;case"create":M.data.err?y.shift()[1](M.data.err):y.shift()[0](M.data.out);break;case"release":M.data.err?w.shift()[1](M.data.err):w.shift()[0]();break;case"run":M.data.err?v.shift()[1](M.data.err):v.shift()[0](M.data.out);break;case"end-profiling":M.data.err?S.shift()[1](M.data.err):S.shift()[0]()}},T=typeof document<"u"?(u=document==null?void 0:document.currentScript)===null||u===void 0?void 0:u.src:void 0;n.initWasm=async()=>{if(o()){if(d)return;if(i)throw new Error("multiple calls to 'initWasm()' detected.");if(g)throw new Error("previous call to 'initWasm()' failed.");return i=!0,h.env.wasm.wasmPaths===void 0&&T&&T.indexOf("blob:")!==0&&(h.env.wasm.wasmPaths=T.substr(0,+T.lastIndexOf("/")+1)),new Promise((M,N)=>{t==null||t.terminate(),t=a(9710).Z(),t.onmessage=A,e=[M,N];const B={type:"init-wasm",in:h.env.wasm};t.postMessage(B)})}return(0,l.initializeWebAssembly)(h.env.wasm)},n.initOrt=async(M,N)=>{if(o())return O(),new Promise((B,$)=>{r=[B,$];const L={type:"init-ort",in:{numThreads:M,loggingLevel:N}};t.postMessage(L)});f.initOrt(M,N)},n.createSessionAllocate=async M=>o()?(O(),new Promise((N,B)=>{m.push([N,B]);const $={type:"create_allocate",in:{model:M}};t.postMessage($,[M.buffer])})):f.createSessionAllocate(M),n.createSessionFinalize=async(M,N)=>o()?(O(),new Promise((B,$)=>{_.push([B,$]);const L={type:"create_finalize",in:{modeldata:M,options:N}};t.postMessage(L)})):f.createSessionFinalize(M,N),n.createSession=async(M,N)=>o()?(O(),new Promise((B,$)=>{y.push([B,$]);const L={type:"create",in:{model:M,options:N}};t.postMessage(L,[M.buffer])})):f.createSession(M,N),n.releaseSession=async M=>{if(o())return O(),new Promise((N,B)=>{w.push([N,B]);const $={type:"release",in:M};t.postMessage($)});f.releaseSession(M)},n.run=async(M,N,B,$,L)=>o()?(O(),new Promise((H,C)=>{v.push([H,C]);const z={type:"run",in:{sessionId:M,inputIndices:N,inputs:B,outputIndices:$,options:L}};t.postMessage(z,f.extractTransferableBuffers(B))})):f.run(M,N,B,$,L),n.endProfiling=async M=>{if(o())return O(),new Promise((N,B)=>{S.push([N,B]);const $={type:"end-profiling",in:M};t.postMessage($)});f.endProfiling(M)}},586:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.setRunOptions=void 0;const u=a(7967),c=a(4983),p=a(6361);n.setRunOptions=s=>{const h=(0,p.getInstance)();let f=0;const l=[],o=s||{};try{if((s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);(s==null?void 0:s.terminate)===void 0&&(o.terminate=!1);let t=0;if((s==null?void 0:s.tag)!==void 0&&(t=(0,c.allocWasmString)(s.tag,l)),f=h._OrtCreateRunOptions(o.logSeverityLevel,o.logVerbosityLevel,!!o.terminate,t),f===0)throw new Error("Can't create run options");return(s==null?void 0:s.extra)!==void 0&&(0,u.iterateExtraOptions)(s.extra,"",new WeakSet,(e,r)=>{const i=(0,c.allocWasmString)(e,l),d=(0,c.allocWasmString)(r,l);if(h._OrtAddRunConfigEntry(f,i,d)!==0)throw new Error(`Can't set a run config entry: ${e} - ${r}`)}),[f,l]}catch(t){throw f!==0&&h._OrtReleaseRunOptions(f),l.forEach(h._free),t}}},2306:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxruntimeWebAssemblySessionHandler=void 0;const u=a(2806),c=a(1670),p=a(2850),s=a(2157);let h;n.OnnxruntimeWebAssemblySessionHandler=class{async createSessionAllocate(f){const l=await fetch(f),o=await l.arrayBuffer();return(0,s.createSessionAllocate)(new Uint8Array(o))}async loadModel(f,l){if(h||(await(0,s.initOrt)(c.env.wasm.numThreads,(o=>{switch(o){case"verbose":return 0;case"info":return 1;case"warning":return 2;case"error":return 3;case"fatal":return 4;default:throw new Error(`unsupported logging level: ${o}`)}})(c.env.logLevel)),h=!0),typeof f=="string")if(typeof fetch>"u"){const o=await(0,p.promisify)(u.readFile)(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(o,l)}else{const o=await this.createSessionAllocate(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSessionFinalize)(o,l)}else[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(f,l)}async dispose(){return(0,s.releaseSession)(this.sessionId)}async run(f,l,o){const t=[],e=[];Object.entries(f).forEach(g=>{const m=g[0],_=g[1],y=this.inputNames.indexOf(m);if(y===-1)throw new Error(`invalid input '${m}'`);t.push(_),e.push(y)});const r=[];Object.entries(l).forEach(g=>{const m=g[0],_=this.outputNames.indexOf(m);if(_===-1)throw new Error(`invalid output '${m}'`);r.push(_)});const i=await(0,s.run)(this.sessionId,e,t.map(g=>[g.type,g.dims,g.data]),r,o),d={};for(let g=0;g{Object.defineProperty(n,"__esModule",{value:!0}),n.setSessionOptions=void 0;const u=a(7967),c=a(4983),p=a(6361);n.setSessionOptions=s=>{const h=(0,p.getInstance)();let f=0;const l=[],o=s||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(o);try{(s==null?void 0:s.graphOptimizationLevel)===void 0&&(o.graphOptimizationLevel="all");const t=(i=>{switch(i){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${i}`)}})(o.graphOptimizationLevel);(s==null?void 0:s.enableCpuMemArena)===void 0&&(o.enableCpuMemArena=!0),(s==null?void 0:s.enableMemPattern)===void 0&&(o.enableMemPattern=!0),(s==null?void 0:s.executionMode)===void 0&&(o.executionMode="sequential");const e=(i=>{switch(i){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${i}`)}})(o.executionMode);let r=0;if((s==null?void 0:s.logId)!==void 0&&(r=(0,c.allocWasmString)(s.logId,l)),(s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);if((s==null?void 0:s.enableProfiling)===void 0&&(o.enableProfiling=!1),f=h._OrtCreateSessionOptions(t,!!o.enableCpuMemArena,!!o.enableMemPattern,e,!!o.enableProfiling,0,r,o.logSeverityLevel,o.logVerbosityLevel),f===0)throw new Error("Can't create session options");return s!=null&&s.executionProviders&&((i,d,g)=>{for(const m of d){let _=typeof m=="string"?m:m.name;switch(_){case"xnnpack":_="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${_}`)}const y=(0,c.allocWasmString)(_,g);if((0,p.getInstance)()._OrtAppendExecutionProvider(i,y)!==0)throw new Error(`Can't append execution provider: ${_}`)}})(f,s.executionProviders,l),(s==null?void 0:s.extra)!==void 0&&(0,u.iterateExtraOptions)(s.extra,"",new WeakSet,(i,d)=>{const g=(0,c.allocWasmString)(i,l),m=(0,c.allocWasmString)(d,l);if(h._OrtAddSessionConfigEntry(f,g,m)!==0)throw new Error(`Can't set a session config entry: ${i} - ${d}`)}),[f,l]}catch(t){throw f!==0&&h._OrtReleaseSessionOptions(f),l.forEach(h._free),t}}},4983:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.allocWasmString=void 0;const u=a(6361);n.allocWasmString=(c,p)=>{const s=(0,u.getInstance)(),h=s.lengthBytesUTF8(c)+1,f=s._malloc(h);return s.stringToUTF8(c,f,h),p.push(f),f}},349:(b,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.extractTransferableBuffers=n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=void 0;const u=a(586),c=a(4919),p=a(4983),s=a(6361);n.initOrt=(t,e)=>{const r=(0,s.getInstance)()._OrtInit(t,e);if(r!==0)throw new Error(`Can't initialize onnxruntime. error code = ${r}`)};const h=new Map;n.createSessionAllocate=t=>{const e=(0,s.getInstance)(),r=e._malloc(t.byteLength);return e.HEAPU8.set(t,r),[r,t.byteLength]},n.createSessionFinalize=(t,e)=>{const r=(0,s.getInstance)();let i=0,d=0,g=[];try{if([d,g]=(0,c.setSessionOptions)(e),i=r._OrtCreateSession(t[0],t[1],d),i===0)throw new Error("Can't create a session")}finally{r._free(t[0]),r._OrtReleaseSessionOptions(d),g.forEach(r._free)}const m=r._OrtGetInputCount(i),_=r._OrtGetOutputCount(i),y=[],w=[],v=[],S=[];for(let O=0;O{const r=(0,n.createSessionAllocate)(t);return(0,n.createSessionFinalize)(r,e)},n.releaseSession=t=>{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],d=r[1],g=r[2];d.forEach(e._OrtFree),g.forEach(e._OrtFree),e._OrtReleaseSession(i),h.delete(t)};const f=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},o=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};n.run=(t,e,r,i,d)=>{const g=(0,s.getInstance)(),m=h.get(t);if(!m)throw new Error("invalid session id");const _=m[0],y=m[1],w=m[2],v=e.length,S=i.length;let O=0,A=[];const T=[],M=[];try{[O,A]=(0,u.setRunOptions)(d);for(let C=0;Cg.HEAP32[Oe++]=Te);const ce=g._OrtCreateTensor(f(z),te,ne,Me,J.length);if(ce===0)throw new Error("Can't create a tensor");T.push(ce)}finally{g.stackRestore(me)}}const N=g.stackSave(),B=g.stackAlloc(4*v),$=g.stackAlloc(4*v),L=g.stackAlloc(4*S),H=g.stackAlloc(4*S);try{let C=B/4,z=$/4,J=L/4,X=H/4;for(let me=0;meve*Be);if(Te=l(He),Te==="string"){const ve=[];let Be=ye/4;for(let Ue=0;Ue{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],d=e._OrtEndProfiling(i);if(d===0)throw new Error("Can't get an profile file name");e._OrtFree(d)},n.extractTransferableBuffers=t=>{const e=[];for(const r of t){const i=r[2];!Array.isArray(i)&&i.buffer&&e.push(i.buffer)}return e}},6361:function(b,n,a){var u=this&&this.__createBinding||(Object.create?function(d,g,m,_){_===void 0&&(_=m);var y=Object.getOwnPropertyDescriptor(g,m);y&&!("get"in y?!g.__esModule:y.writable||y.configurable)||(y={enumerable:!0,get:function(){return g[m]}}),Object.defineProperty(d,_,y)}:function(d,g,m,_){_===void 0&&(_=m),d[_]=g[m]}),c=this&&this.__setModuleDefault||(Object.create?function(d,g){Object.defineProperty(d,"default",{enumerable:!0,value:g})}:function(d,g){d.default=g}),p=this&&this.__importStar||function(d){if(d&&d.__esModule)return d;var g={};if(d!=null)for(var m in d)m!=="default"&&Object.prototype.hasOwnProperty.call(d,m)&&u(g,d,m);return c(g,d),g},s=this&&this.__importDefault||function(d){return d&&d.__esModule?d:{default:d}};Object.defineProperty(n,"__esModule",{value:!0}),n.dispose=n.getInstance=n.initializeWebAssembly=void 0;const h=p(a(6449)),f=s(a(932)),l=a(3474);let o,t=!1,e=!1,r=!1;const i=(d,g)=>g?d?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":d?"ort-wasm-simd.wasm":"ort-wasm.wasm";n.initializeWebAssembly=async d=>{if(t)return Promise.resolve();if(e)throw new Error("multiple calls to 'initializeWebAssembly()' detected.");if(r)throw new Error("previous call to 'initializeWebAssembly()' failed.");e=!0;const g=d.initTimeout,m=d.numThreads,_=d.simd,y=m>1&&(()=>{try{return typeof SharedArrayBuffer<"u"&&(typeof MessageChannel<"u"&&new MessageChannel().port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch{return!1}})(),w=_&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch{return!1}})(),v=typeof d.wasmPaths=="string"?d.wasmPaths:void 0,S=i(!1,y),O=i(w,y),A=typeof d.wasmPaths=="object"?d.wasmPaths[O]:void 0;let T=!1;const M=[];if(g>0&&M.push(new Promise(N=>{setTimeout(()=>{T=!0,N()},g)})),M.push(new Promise((N,B)=>{const $=y?l:f.default,L={locateFile:(H,C)=>y&&H.endsWith(".worker.js")&&typeof Blob<"u"?URL.createObjectURL(new Blob([a(4154)],{type:"text/javascript"})):H===S?A??(v??C)+O:C+H};if(y)if(typeof Blob>"u")L.mainScriptUrlOrBlob=h.join("/","ort-wasm-threaded.js");else{const H=`var ortWasmThreaded=(function(){var _scriptDir;return ${$.toString()}})();`;L.mainScriptUrlOrBlob=new Blob([H],{type:"text/javascript"})}$(L).then(H=>{e=!1,t=!0,o=H,N()},H=>{e=!1,r=!0,B(H)})})),await Promise.race(M),T)throw new Error(`WebAssembly backend initializing failed due to timeout: ${g}ms`)},n.getInstance=()=>{if(t&&o)return o;throw new Error("WebAssembly is not initialized yet.")},n.dispose=()=>{var d;!t||e||r||(e=!0,(d=o.PThread)===null||d===void 0||d.terminateAllThreads(),o=void 0,e=!1,t=!1,r=!0)}},9710:(b,n,a)=>{a.d(n,{Z:()=>p});var u=a(477),c=a.n(u);function p(){return c()('/*!\n* ONNX Runtime Web v1.14.0\n* Copyright (c) Microsoft Corporation. All rights reserved.\n* Licensed under the MIT License.\n*/\n(()=>{var t={474:(t,e,n)=>{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){function e(){return j.buffer!=D&&N(j.buffer),P}function r(){return j.buffer!=D&&N(j.buffer),U}function a(){return j.buffer!=D&&N(j.buffer),F}function i(){return j.buffer!=D&&N(j.buffer),I}function o(){return j.buffer!=D&&N(j.buffer),W}var u,c,s;t=t||{},u||(u=void 0!==t?t:{}),u.ready=new Promise((function(t,e){c=t,s=e}));var l,f,p,h,d,y,b=Object.assign({},u),m="./this.program",g=(t,e)=>{throw e},v="object"==typeof window,w="function"==typeof importScripts,_="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,O=u.ENVIRONMENT_IS_PTHREAD||!1,A="";function S(t){return u.locateFile?u.locateFile(t,A):A+t}if(_){let e;A=w?n(908).dirname(A)+"/":"//",y=()=>{d||(h=n(384),d=n(908))},l=function(t,e){return y(),t=d.normalize(t),h.readFileSync(t,e?void 0:"utf8")},p=t=>((t=l(t,!0)).buffer||(t=new Uint8Array(t)),t),f=(t,e,n)=>{y(),t=d.normalize(t),h.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(Q())throw process.exitCode=t,e;e instanceof ct||x("exiting due to exception: "+e),process.exit(t)},u.inspect=function(){return"[Emscripten Module object]"};try{e=n(925)}catch(t){throw console.error(\'The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?\'),t}n.g.Worker=e.Worker}else(v||w)&&(w?A=self.location.href:"undefined"!=typeof document&&document.currentScript&&(A=document.currentScript.src),_scriptDir&&(A=_scriptDir),A=0!==A.indexOf("blob:")?A.substr(0,A.replace(/[?#].*/,"").lastIndexOf("/")+1):"",_||(l=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},w&&(p=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),f=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)}));_&&"undefined"==typeof performance&&(n.g.performance=n(953).performance);var T=console.log.bind(console),E=console.warn.bind(console);_&&(y(),T=t=>h.writeSync(1,t+"\\n"),E=t=>h.writeSync(2,t+"\\n"));var M,C=u.print||T,x=u.printErr||E;Object.assign(u,b),b=null,u.thisProgram&&(m=u.thisProgram),u.quit&&(g=u.quit),u.wasmBinary&&(M=u.wasmBinary);var R=u.noExitRuntime||!1;"object"!=typeof WebAssembly&&at("no native wasm support detected");var j,k,D,P,U,F,I,W,H=!1,L="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function z(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function Y(t,e){return(t>>>=0)?z(r(),t,e):""}function B(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function G(t){for(var e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function N(t){D=t,u.HEAP8=P=new Int8Array(t),u.HEAP16=new Int16Array(t),u.HEAP32=F=new Int32Array(t),u.HEAPU8=U=new Uint8Array(t),u.HEAPU16=new Uint16Array(t),u.HEAPU32=I=new Uint32Array(t),u.HEAPF32=new Float32Array(t),u.HEAPF64=W=new Float64Array(t)}O&&(D=u.buffer);var V=u.INITIAL_MEMORY||16777216;if(O)j=u.wasmMemory,D=u.buffer;else if(u.wasmMemory)j=u.wasmMemory;else if(!((j=new WebAssembly.Memory({initial:V/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw x("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),_&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");j&&(D=j.buffer),V=D.byteLength,N(D);var $,q=[],X=[],J=[],Z=[];function Q(){return R||!1}function K(){var t=u.preRun.shift();q.unshift(t)}var tt,et=0,nt=null,rt=null;function at(t){throw O?postMessage({cmd:"onAbort",arg:t}):u.onAbort&&u.onAbort(t),x(t="Aborted("+t+")"),H=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),s(t),t}function it(){return tt.startsWith("data:application/octet-stream;base64,")}function ot(){var t=tt;try{if(t==tt&&M)return new Uint8Array(M);if(p)return p(t);throw"both async and sync fetching of the wasm failed"}catch(t){at(t)}}tt="ort-wasm-threaded.wasm",it()||(tt=S(tt));var ut={};function ct(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function st(t){(t=ht.Vb[t])||at(),ht.mc(t)}function lt(t){var e=ht.Cc();if(!e)return 6;ht.ac.push(e),ht.Vb[t.Ub]=e,e.Ub=t.Ub;var n={cmd:"run",start_routine:t.Ic,arg:t.zc,pthread_ptr:t.Ub};return e.$b=()=>{n.time=performance.now(),e.postMessage(n,t.Nc)},e.loaded&&(e.$b(),delete e.$b),0}function ft(t){if(O)return $t(1,1,t);Q()||(ht.oc(),u.onExit&&u.onExit(t),H=!0),g(t,new ct(t))}function pt(t,e){if(!e&&O)throw bt(t),"unwind";Q()||O||(me(),dt(J),be(0),re[1].length&&ae(1,10),re[2].length&&ae(2,10),ht.oc()),ft(t)}var ht={Yb:[],ac:[],qc:[],Vb:{},fc:function(){O&&ht.Ec()},Pc:function(){},Ec:function(){ht.receiveObjectTransfer=ht.Gc,ht.threadInitTLS=ht.pc,ht.setExitStatus=ht.nc,R=!1},nc:function(){},oc:function(){for(var t of Object.values(ht.Vb))ht.mc(t);for(t of ht.Yb)t.terminate();ht.Yb=[]},mc:function(t){var e=t.Ub;delete ht.Vb[e],ht.Yb.push(t),ht.ac.splice(ht.ac.indexOf(t),1),t.Ub=0,Oe(e)},Gc:function(){},pc:function(){ht.qc.forEach((t=>t()))},Fc:function(t,e){t.onmessage=n=>{var r=(n=n.data).cmd;if(t.Ub&&(ht.Bc=t.Ub),n.targetThread&&n.targetThread!=he()){var a=ht.Vb[n.Qc];a?a.postMessage(n,n.transferList):x(\'Internal error! Worker sent a message "\'+r+\'" to target pthread \'+n.targetThread+", but that thread no longer exists!")}else"processProxyingQueue"===r?zt(n.queue):"spawnThread"===r?lt(n):"cleanupThread"===r?st(n.thread):"killThread"===r?(n=n.thread,r=ht.Vb[n],delete ht.Vb[n],r.terminate(),Oe(n),ht.ac.splice(ht.ac.indexOf(r),1),r.Ub=0):"cancelThread"===r?ht.Vb[n.thread].postMessage({cmd:"cancel"}):"loaded"===r?(t.loaded=!0,e&&e(t),t.$b&&(t.$b(),delete t.$b)):"print"===r?C("Thread "+n.threadId+": "+n.text):"printErr"===r?x("Thread "+n.threadId+": "+n.text):"alert"===r?alert("Thread "+n.threadId+": "+n.text):"setimmediate"===n.target?t.postMessage(n):"onAbort"===r?u.onAbort&&u.onAbort(n.arg):r&&x("worker sent an unknown command "+r);ht.Bc=void 0},t.onerror=t=>{throw x("worker sent an error! "+t.filename+":"+t.lineno+": "+t.message),t},_&&(t.on("message",(function(e){t.onmessage({data:e})})),t.on("error",(function(e){t.onerror(e)})),t.on("detachedExit",(function(){}))),t.postMessage({cmd:"load",urlOrBlob:u.mainScriptUrlOrBlob||_scriptDir,wasmMemory:j,wasmModule:k})},yc:function(){var t=S("ort-wasm-threaded.worker.js");ht.Yb.push(new Worker(t))},Cc:function(){return 0==ht.Yb.length&&(ht.yc(),ht.Fc(ht.Yb[0])),ht.Yb.pop()}};function dt(t){for(;0>2>>>0];t=a()[t+48>>2>>>0],Te(e,e-t),Me(e)};var mt=[];function gt(t){var e=mt[t];return e||(t>=mt.length&&(mt.length=t+1),mt[t]=e=$.get(t)),e}u.invokeEntryPoint=function(t,e){t=gt(t)(e),Q()?ht.nc(t):Ae(t)};var vt,wt,_t=[],Ot=0,At=0;function St(t){this.Zb=t,this.Sb=t-24,this.xc=function(t){i()[this.Sb+4>>2>>>0]=t},this.bc=function(){return i()[this.Sb+4>>2>>>0]},this.wc=function(t){i()[this.Sb+8>>2>>>0]=t},this.Dc=function(){return i()[this.Sb+8>>2>>>0]},this.rc=function(){a()[this.Sb>>2>>>0]=0},this.hc=function(t){t=t?1:0,e()[this.Sb+12>>0>>>0]=t},this.uc=function(){return 0!=e()[this.Sb+12>>0>>>0]},this.ic=function(t){t=t?1:0,e()[this.Sb+13>>0>>>0]=t},this.kc=function(){return 0!=e()[this.Sb+13>>0>>>0]},this.fc=function(t,e){this.cc(0),this.xc(t),this.wc(e),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(a(),this.Sb>>2,1)},this.Hc=function(){return 1===Atomics.sub(a(),this.Sb>>2,1)},this.cc=function(t){i()[this.Sb+16>>2>>>0]=t},this.tc=function(){return i()[this.Sb+16>>2>>>0]},this.vc=function(){if(Re(this.bc()))return i()[this.Zb>>2>>>0];var t=this.tc();return 0!==t?t:this.Zb}}function Tt(t){return ye(new St(t).Sb)}function Et(t,e,n,r){return O?$t(3,1,t,e,n,r):Mt(t,e,n,r)}function Mt(t,e,n,r){if("undefined"==typeof SharedArrayBuffer)return x("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var a=[];return O&&0===a.length?Et(t,e,n,r):(t={Ic:n,Ub:t,zc:r,Nc:a},O?(t.Oc="spawnThread",postMessage(t,a),0):lt(t))}function Ct(t,e,n){return O?$t(4,1,t,e,n):0}function xt(t,e){if(O)return $t(5,1,t,e)}function Rt(t,e){if(O)return $t(6,1,t,e)}function jt(t,e,n){if(O)return $t(7,1,t,e,n)}function kt(t,e,n){return O?$t(8,1,t,e,n):0}function Dt(t,e){if(O)return $t(9,1,t,e)}function Pt(t,e,n){if(O)return $t(10,1,t,e,n)}function Ut(t,e,n,r){if(O)return $t(11,1,t,e,n,r)}function Ft(t,e,n,r){if(O)return $t(12,1,t,e,n,r)}function It(t,e,n,r){if(O)return $t(13,1,t,e,n,r)}function Wt(t){if(O)return $t(14,1,t)}function Ht(t,e){if(O)return $t(15,1,t,e)}function Lt(t,e,n){if(O)return $t(16,1,t,e,n)}function zt(t){Atomics.store(a(),t>>2,1),he()&&_e(t),Atomics.compareExchange(a(),t>>2,1,0)}function Yt(t){return i()[t>>>2]+4294967296*a()[t+4>>>2]}function Bt(t,e,n,r,a,i){return O?$t(17,1,t,e,n,r,a,i):-52}function Gt(t,e,n,r,a,i){if(O)return $t(18,1,t,e,n,r,a,i)}function Nt(t){var n=G(t)+1,r=de(n);return r&&B(t,e(),r,n),r}function Vt(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}if(O)return $t(19,1,t,e,n);var o=(new Date).getFullYear(),u=new Date(o,0,1),c=new Date(o,6,1);o=u.getTimezoneOffset();var s=c.getTimezoneOffset(),l=Math.max(o,s);a()[t>>2>>>0]=60*l,a()[e>>2>>>0]=Number(o!=s),t=r(u),e=r(c),t=Nt(t),e=Nt(e),s>2>>>0]=t,i()[n+4>>2>>>0]=e):(i()[n>>2>>>0]=e,i()[n+4>>2>>>0]=t)}function $t(t,e){var n=arguments.length-2,r=arguments;return yt((()=>{for(var a=Ce(8*n),i=a>>3,u=0;u>>0]=c}return we(t,n,a,e)}))}u.executeNotifiedProxyingQueue=zt,wt=_?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:O?()=>performance.now()-u.__performance_now_clock_drift:()=>performance.now();var qt,Xt=[],Jt={};function Zt(){if(!qt){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:m||"./this.program"};for(t in Jt)void 0===Jt[t]?delete e[t]:e[t]=Jt[t];var n=[];for(t in e)n.push(t+"="+e[t]);qt=n}return qt}function Qt(t,n){if(O)return $t(20,1,t,n);var r=0;return Zt().forEach((function(a,o){var u=n+r;for(o=i()[t+4*o>>2>>>0]=u,u=0;u>0>>>0]=a.charCodeAt(u);e()[o>>0>>>0]=0,r+=a.length+1})),0}function Kt(t,e){if(O)return $t(21,1,t,e);var n=Zt();i()[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),i()[e>>2>>>0]=r,0}function te(t){return O?$t(22,1,t):52}function ee(t,e,n,r){return O?$t(23,1,t,e,n,r):52}function ne(t,e,n,r,a){return O?$t(24,1,t,e,n,r,a):70}var re=[null,[],[]];function ae(t,e){var n=re[t];0===e||10===e?((1===t?C:x)(z(n,0)),n.length=0):n.push(e)}function ie(t,e,n,a){if(O)return $t(25,1,t,e,n,a);for(var o=0,u=0;u>2>>>0],s=i()[e+4>>2>>>0];e+=8;for(var l=0;l>>0]);o+=s}return i()[a>>2>>>0]=o,0}var oe=0;function ue(t){return 0==t%4&&(0!=t%100||0==t%400)}var ce=[31,29,31,30,31,30,31,31,30,31,30,31],se=[31,28,31,30,31,30,31,31,30,31,30,31];function le(t,n,r,i){function o(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=s(new Date(t.getFullYear(),0,4)),n=s(n),0>=c(e,t)?0>=c(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var f=a()[i+40>>2>>>0];for(var p in i={Lc:a()[i>>2>>>0],Kc:a()[i+4>>2>>>0],dc:a()[i+8>>2>>>0],jc:a()[i+12>>2>>>0],ec:a()[i+16>>2>>>0],Xb:a()[i+20>>2>>>0],Tb:a()[i+24>>2>>>0],Wb:a()[i+28>>2>>>0],Rc:a()[i+32>>2>>>0],Jc:a()[i+36>>2>>>0],Mc:f?Y(f):""},r=Y(r),f={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})r=r.replace(new RegExp(p,"g"),f[p]);var h="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),d="January February March April May June July August September October November December".split(" ");for(p in f={"%a":function(t){return h[t.Tb].substring(0,3)},"%A":function(t){return h[t.Tb]},"%b":function(t){return d[t.ec].substring(0,3)},"%B":function(t){return d[t.ec]},"%C":function(t){return u((t.Xb+1900)/100|0,2)},"%d":function(t){return u(t.jc,2)},"%e":function(t){return o(t.jc,2," ")},"%g":function(t){return l(t).toString().substring(2)},"%G":function(t){return l(t)},"%H":function(t){return u(t.dc,2)},"%I":function(t){return 0==(t=t.dc)?t=12:12t.dc?"AM":"PM"},"%S":function(t){return u(t.Lc,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Tb||7},"%U":function(t){return u(Math.floor((t.Wb+7-t.Tb)/7),2)},"%V":function(t){var e=Math.floor((t.Wb+7-(t.Tb+6)%7)/7);if(2>=(t.Tb+371-t.Wb-2)%7&&e++,e)53==e&&(4==(n=(t.Tb+371-t.Wb)%7)||3==n&&ue(t.Xb)||(e=1));else{e=52;var n=(t.Tb+7-t.Wb-1)%7;(4==n||5==n&&ue(t.Xb%400-1))&&e++}return u(e,2)},"%w":function(t){return t.Tb},"%W":function(t){return u(Math.floor((t.Wb+7-(t.Tb+6)%7)/7),2)},"%y":function(t){return(t.Xb+1900).toString().substring(2)},"%Y":function(t){return t.Xb+1900},"%z":function(t){var e=0<=(t=t.Jc);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.Mc},"%%":function(){return"%"}},r=r.replace(/%%/g,"\\0\\0"),f)r.includes(p)&&(r=r.replace(new RegExp(p,"g"),f[p](i)));return p=function(t){var e=Array(G(t)+1);return B(t,e,0,e.length),e}(r=r.replace(/\\0\\0/g,"%")),p.length>n?0:(function(t,n){e().set(t,n>>>0)}(p,t),p.length-1)}ht.fc();var fe=[null,ft,bt,Et,Ct,xt,Rt,jt,kt,Dt,Pt,Ut,Ft,It,Wt,Ht,Lt,Bt,Gt,Vt,Qt,Kt,te,ee,ne,ie],pe={b:function(t){return de(t+24)+24},n:function(t){return(t=new St(t)).uc()||(t.hc(!0),Ot--),t.ic(!1),_t.push(t),t.sc(),t.vc()},ma:function(t){throw x("Unexpected exception thrown, this is not properly supported - aborting"),H=!0,t},x:function(){Se(0);var t=_t.pop();if(t.Hc()&&!t.kc()){var e=t.Dc();e&>(e)(t.Zb),Tt(t.Zb)}At=0},e:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;azt(r)));else if(O)postMessage({targetThread:t,cmd:"processProxyingQueue",queue:r});else{if(!(t=ht.Vb[t]))return;t.postMessage({cmd:"processProxyingQueue",queue:r})}return 1},Ea:function(){return-1},Pa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getUTCSeconds(),a()[e+4>>2>>>0]=t.getUTCMinutes(),a()[e+8>>2>>>0]=t.getUTCHours(),a()[e+12>>2>>>0]=t.getUTCDate(),a()[e+16>>2>>>0]=t.getUTCMonth(),a()[e+20>>2>>>0]=t.getUTCFullYear()-1900,a()[e+24>>2>>>0]=t.getUTCDay(),t=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,a()[e+28>>2>>>0]=t},Qa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getSeconds(),a()[e+4>>2>>>0]=t.getMinutes(),a()[e+8>>2>>>0]=t.getHours(),a()[e+12>>2>>>0]=t.getDate(),a()[e+16>>2>>>0]=t.getMonth(),a()[e+20>>2>>>0]=t.getFullYear()-1900,a()[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1),r=(t.getTime()-n.getTime())/864e5|0;a()[e+28>>2>>>0]=r,a()[e+36>>2>>>0]=-60*t.getTimezoneOffset(),r=new Date(t.getFullYear(),6,1).getTimezoneOffset(),t=0|(r!=(n=n.getTimezoneOffset())&&t.getTimezoneOffset()==Math.min(n,r)),a()[e+32>>2>>>0]=t},Ra:function(t){var e=new Date(a()[t+20>>2>>>0]+1900,a()[t+16>>2>>>0],a()[t+12>>2>>>0],a()[t+8>>2>>>0],a()[t+4>>2>>>0],a()[t>>2>>>0],0),n=a()[t+32>>2>>>0],r=e.getTimezoneOffset(),i=new Date(e.getFullYear(),0,1),o=new Date(e.getFullYear(),6,1).getTimezoneOffset(),u=i.getTimezoneOffset(),c=Math.min(u,o);return 0>n?a()[t+32>>2>>>0]=Number(o!=u&&c==r):0>2>>>0]=e.getDay(),n=(e.getTime()-i.getTime())/864e5|0,a()[t+28>>2>>>0]=n,a()[t>>2>>>0]=e.getSeconds(),a()[t+4>>2>>>0]=e.getMinutes(),a()[t+8>>2>>>0]=e.getHours(),a()[t+12>>2>>>0]=e.getDate(),a()[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},Aa:Bt,Ba:Gt,Sa:function t(e,n,r){t.Ac||(t.Ac=!0,Vt(e,n,r))},y:function(){at("")},U:function(){if(!_&&!w){var t="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";vt||(vt={}),vt[t]||(vt[t]=1,_&&(t="warning: "+t),x(t))}},ra:function(){return 4294901760},B:wt,Ia:function(t,e,n){r().copyWithin(t>>>0,e>>>0,e+n>>>0)},F:function(){return _?n(993).cpus().length:navigator.hardwareConcurrency},Da:function(t,e,n){Xt.length=e,n>>=3;for(var r=0;r>>0];return(0>t?ut[-t-1]:fe[t]).apply(null,Xt)},qa:function(t){var e=r().length;if((t>>>=0)<=e||4294901760=n;n*=2){var a=e*(1+.2/n);a=Math.min(a,t+100663296);var i=Math;a=Math.max(t,a),i=i.min.call(i,4294901760,a+(65536-a%65536)%65536);t:{try{j.grow(i-D.byteLength+65535>>>16),N(j.buffer);var o=1;break t}catch(t){}o=void 0}if(o)return!0}return!1},Na:function(){throw"unwind"},Ga:Qt,Ha:Kt,J:pt,I:te,S:ee,ga:ne,R:ie,d:function(){return oe},na:function t(r,a){t.lc||(t.lc=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(_)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>at("randomDevice")}());for(var i=0;i>0>>>0]=t.lc();return 0},ia:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ja:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},K:function(t){var e=Ee();try{return gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},f:function(t,e){var n=Ee();try{return gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},P:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},Q:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},k:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},p:function(t,e,n,r){var a=Ee();try{return gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},q:function(t,e,n,r,a){var i=Ee();try{return gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},N:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},s:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},w:function(t,e,n,r,a,i,o){var u=Ee();try{return gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},L:function(t,e,n,r,a,i,o,u){var c=Ee();try{return gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},E:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{return gt(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=Ee();try{return He(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},_:function(t,e,n,r,a,i,o){var u=Ee();try{return ke(t,e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},Z:function(t,e,n,r,a){var i=Ee();try{return Le(t,e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},ca:function(t,e,n,r){var a=Ee();try{return Ie(t,e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},$:function(t){var e=Ee();try{return je(t)}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},ba:function(t,e){var n=Ee();try{return We(t,e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},Y:function(t,e,n){var r=Ee();try{return De(t,e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},g:function(t){var e=Ee();try{gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},r:function(t,e){var n=Ee();try{gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},i:function(t,e,n){var r=Ee();try{gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ha:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},m:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},v:function(t,e,n,r,a){var i=Ee();try{gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},u:function(t,e,n,r,a,i){var o=Ee();try{gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},O:function(t,e,n,r,a,i,o){var u=Ee();try{gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},A:function(t,e,n,r,a,i,o,u){var c=Ee();try{gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},ka:function(t,e,n,r,a,i,o,u,c){var s=Ee();try{gt(t)(e,n,r,a,i,o,u,c)}catch(t){if(Me(s),t!==t+0)throw t;Se(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l){var f=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(Me(f),t!==t+0)throw t;Se(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(Me(b),t!==t+0)throw t;Se(1,0)}},fa:function(t,e,n,r,a,i,o,u){var c=Ee();try{Pe(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},da:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{Fe(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},ea:function(t,e,n,r,a,i){var o=Ee();try{Ue(t,e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},o:function(t){return t},a:j||u.wasmMemory,G:function(t){oe=t},la:le,z:function(t,e,n,r){return le(t,e,n,r)}};!function(){function t(t,e){u.asm=t.exports,ht.qc.push(u.asm.sb),$=u.asm.ub,X.unshift(u.asm.Va),k=e,O||(et--,u.monitorRunDependencies&&u.monitorRunDependencies(et),0==et&&(null!==nt&&(clearInterval(nt),nt=null),rt&&(t=rt,rt=null,t())))}function e(e){t(e.instance,e.module)}function n(t){return function(){if(!M&&(v||w)){if("function"==typeof fetch&&!tt.startsWith("file://"))return fetch(tt,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+tt+"\'";return t.arrayBuffer()})).catch((function(){return ot()}));if(f)return new Promise((function(t,e){f(tt,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return ot()}))}().then((function(t){return WebAssembly.instantiate(t,r)})).then((function(t){return t})).then(t,(function(t){x("failed to asynchronously prepare wasm: "+t),at(t)}))}var r={a:pe};if(O||(et++,u.monitorRunDependencies&&u.monitorRunDependencies(et)),u.instantiateWasm)try{return u.instantiateWasm(r,t)}catch(t){return x("Module.instantiateWasm callback failed with error: "+t),!1}(M||"function"!=typeof WebAssembly.instantiateStreaming||it()||tt.startsWith("file://")||_||"function"!=typeof fetch?n(e):fetch(tt,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,r).then(e,(function(t){return x("wasm streaming compile failed: "+t),x("falling back to ArrayBuffer instantiation"),n(e)}))}))).catch(s)}(),u.___wasm_call_ctors=function(){return(u.___wasm_call_ctors=u.asm.Va).apply(null,arguments)},u._OrtInit=function(){return(u._OrtInit=u.asm.Wa).apply(null,arguments)},u._OrtCreateSessionOptions=function(){return(u._OrtCreateSessionOptions=u.asm.Xa).apply(null,arguments)},u._OrtAppendExecutionProvider=function(){return(u._OrtAppendExecutionProvider=u.asm.Ya).apply(null,arguments)},u._OrtAddSessionConfigEntry=function(){return(u._OrtAddSessionConfigEntry=u.asm.Za).apply(null,arguments)},u._OrtReleaseSessionOptions=function(){return(u._OrtReleaseSessionOptions=u.asm._a).apply(null,arguments)},u._OrtCreateSession=function(){return(u._OrtCreateSession=u.asm.$a).apply(null,arguments)},u._OrtReleaseSession=function(){return(u._OrtReleaseSession=u.asm.ab).apply(null,arguments)},u._OrtGetInputCount=function(){return(u._OrtGetInputCount=u.asm.bb).apply(null,arguments)},u._OrtGetOutputCount=function(){return(u._OrtGetOutputCount=u.asm.cb).apply(null,arguments)},u._OrtGetInputName=function(){return(u._OrtGetInputName=u.asm.db).apply(null,arguments)},u._OrtGetOutputName=function(){return(u._OrtGetOutputName=u.asm.eb).apply(null,arguments)},u._OrtFree=function(){return(u._OrtFree=u.asm.fb).apply(null,arguments)},u._OrtCreateTensor=function(){return(u._OrtCreateTensor=u.asm.gb).apply(null,arguments)},u._OrtGetTensorData=function(){return(u._OrtGetTensorData=u.asm.hb).apply(null,arguments)},u._OrtReleaseTensor=function(){return(u._OrtReleaseTensor=u.asm.ib).apply(null,arguments)},u._OrtCreateRunOptions=function(){return(u._OrtCreateRunOptions=u.asm.jb).apply(null,arguments)},u._OrtAddRunConfigEntry=function(){return(u._OrtAddRunConfigEntry=u.asm.kb).apply(null,arguments)},u._OrtReleaseRunOptions=function(){return(u._OrtReleaseRunOptions=u.asm.lb).apply(null,arguments)},u._OrtRun=function(){return(u._OrtRun=u.asm.mb).apply(null,arguments)},u._OrtEndProfiling=function(){return(u._OrtEndProfiling=u.asm.nb).apply(null,arguments)};var he=u._pthread_self=function(){return(he=u._pthread_self=u.asm.ob).apply(null,arguments)},de=u._malloc=function(){return(de=u._malloc=u.asm.pb).apply(null,arguments)},ye=u._free=function(){return(ye=u._free=u.asm.qb).apply(null,arguments)},be=u._fflush=function(){return(be=u._fflush=u.asm.rb).apply(null,arguments)};u.__emscripten_tls_init=function(){return(u.__emscripten_tls_init=u.asm.sb).apply(null,arguments)};var me=u.___funcs_on_exit=function(){return(me=u.___funcs_on_exit=u.asm.tb).apply(null,arguments)},ge=u.__emscripten_thread_init=function(){return(ge=u.__emscripten_thread_init=u.asm.vb).apply(null,arguments)};u.__emscripten_thread_crashed=function(){return(u.__emscripten_thread_crashed=u.asm.wb).apply(null,arguments)};var ve,we=u._emscripten_run_in_main_runtime_thread_js=function(){return(we=u._emscripten_run_in_main_runtime_thread_js=u.asm.xb).apply(null,arguments)},_e=u.__emscripten_proxy_execute_task_queue=function(){return(_e=u.__emscripten_proxy_execute_task_queue=u.asm.yb).apply(null,arguments)},Oe=u.__emscripten_thread_free_data=function(){return(Oe=u.__emscripten_thread_free_data=u.asm.zb).apply(null,arguments)},Ae=u.__emscripten_thread_exit=function(){return(Ae=u.__emscripten_thread_exit=u.asm.Ab).apply(null,arguments)},Se=u._setThrew=function(){return(Se=u._setThrew=u.asm.Bb).apply(null,arguments)},Te=u._emscripten_stack_set_limits=function(){return(Te=u._emscripten_stack_set_limits=u.asm.Cb).apply(null,arguments)},Ee=u.stackSave=function(){return(Ee=u.stackSave=u.asm.Db).apply(null,arguments)},Me=u.stackRestore=function(){return(Me=u.stackRestore=u.asm.Eb).apply(null,arguments)},Ce=u.stackAlloc=function(){return(Ce=u.stackAlloc=u.asm.Fb).apply(null,arguments)},xe=u.___cxa_can_catch=function(){return(xe=u.___cxa_can_catch=u.asm.Gb).apply(null,arguments)},Re=u.___cxa_is_pointer_type=function(){return(Re=u.___cxa_is_pointer_type=u.asm.Hb).apply(null,arguments)},je=u.dynCall_j=function(){return(je=u.dynCall_j=u.asm.Ib).apply(null,arguments)},ke=u.dynCall_iiiiij=function(){return(ke=u.dynCall_iiiiij=u.asm.Jb).apply(null,arguments)},De=u.dynCall_jii=function(){return(De=u.dynCall_jii=u.asm.Kb).apply(null,arguments)},Pe=u.dynCall_viiiiij=function(){return(Pe=u.dynCall_viiiiij=u.asm.Lb).apply(null,arguments)},Ue=u.dynCall_vjji=function(){return(Ue=u.dynCall_vjji=u.asm.Mb).apply(null,arguments)},Fe=u.dynCall_viiijjjii=function(){return(Fe=u.dynCall_viiijjjii=u.asm.Nb).apply(null,arguments)},Ie=u.dynCall_iij=function(){return(Ie=u.dynCall_iij=u.asm.Ob).apply(null,arguments)},We=u.dynCall_ji=function(){return(We=u.dynCall_ji=u.asm.Pb).apply(null,arguments)},He=u.dynCall_iiiiiij=function(){return(He=u.dynCall_iiiiiij=u.asm.Qb).apply(null,arguments)},Le=u.dynCall_iiij=function(){return(Le=u.dynCall_iiij=u.asm.Rb).apply(null,arguments)};function ze(){function t(){if(!ve&&(ve=!0,u.calledRun=!0,!H)&&(O||dt(X),c(u),u.onRuntimeInitialized&&u.onRuntimeInitialized(),!O)){if(u.postRun)for("function"==typeof u.postRun&&(u.postRun=[u.postRun]);u.postRun.length;){var t=u.postRun.shift();Z.unshift(t)}dt(Z)}}if(!(0{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){var e,r,a;t=t||{},e||(e=void 0!==t?t:{}),e.ready=new Promise((function(t,e){r=t,a=e}));var i,o,u,c,s,l,f=Object.assign({},e),p="./this.program",h=(t,e)=>{throw e},d="object"==typeof window,y="function"==typeof importScripts,b="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,m="";b?(m=y?n(908).dirname(m)+"/":"//",l=()=>{s||(c=n(384),s=n(908))},i=function(t,e){return l(),t=s.normalize(t),c.readFileSync(t,e?void 0:"utf8")},u=t=>((t=i(t,!0)).buffer||(t=new Uint8Array(t)),t),o=(t,e,n)=>{l(),t=s.normalize(t),c.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1{if(_||0{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},y&&(u=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),o=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)});var g,v=e.print||console.log.bind(console),w=e.printErr||console.warn.bind(console);Object.assign(e,f),f=null,e.thisProgram&&(p=e.thisProgram),e.quit&&(h=e.quit),e.wasmBinary&&(g=e.wasmBinary);var _=e.noExitRuntime||!1;"object"!=typeof WebAssembly&&V("no native wasm support detected");var O,A,S,T,E,M,C=!1,x="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function R(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function j(t,e){return(t>>>=0)?R(T,t,e):""}function k(t,e,n,r){if(!(0>>=0;r=n+r-1;for(var i=0;i=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function D(t){for(var e=0,n=0;n=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function P(){var t=O.buffer;A=t,e.HEAP8=S=new Int8Array(t),e.HEAP16=new Int16Array(t),e.HEAP32=E=new Int32Array(t),e.HEAPU8=T=new Uint8Array(t),e.HEAPU16=new Uint16Array(t),e.HEAPU32=M=new Uint32Array(t),e.HEAPF32=new Float32Array(t),e.HEAPF64=new Float64Array(t)}var U,F=[],I=[],W=[],H=[],L=0;function z(){var t=e.preRun.shift();F.unshift(t)}var Y,B=0,G=null,N=null;function V(t){throw e.onAbort&&e.onAbort(t),w(t="Aborted("+t+")"),C=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),a(t),t}function $(){return Y.startsWith("data:application/octet-stream;base64,")}if(Y="ort-wasm.wasm",!$()){var q=Y;Y=e.locateFile?e.locateFile(q,m):m+q}function X(){var t=Y;try{if(t==Y&&g)return new Uint8Array(g);if(u)return u(t);throw"both async and sync fetching of the wasm failed"}catch(t){V(t)}}function J(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function Z(t){for(;0>2>>>0]=t},this.Eb=function(){return M[this.zb+4>>2>>>0]},this.Sb=function(t){M[this.zb+8>>2>>>0]=t},this.Wb=function(){return M[this.zb+8>>2>>>0]},this.Tb=function(){E[this.zb>>2>>>0]=0},this.Ib=function(t){S[this.zb+12>>0>>>0]=t?1:0},this.Pb=function(){return 0!=S[this.zb+12>>0>>>0]},this.Jb=function(t){S[this.zb+13>>0>>>0]=t?1:0},this.Lb=function(){return 0!=S[this.zb+13>>0>>>0]},this.Rb=function(t,e){this.Fb(0),this.Ub(t),this.Sb(e),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){E[this.zb>>2>>>0]+=1},this.Xb=function(){var t=E[this.zb>>2>>>0];return E[this.zb>>2>>>0]=t-1,1===t},this.Fb=function(t){M[this.zb+16>>2>>>0]=t},this.Ob=function(){return M[this.zb+16>>2>>>0]},this.Qb=function(){if(Mt(this.Eb()))return M[this.Db>>2>>>0];var t=this.Ob();return 0!==t?t:this.Db}}function nt(t){return vt(new et(t).zb)}var rt=[];function at(t){var e=rt[t];return e||(t>=rt.length&&(rt.length=t+1),rt[t]=e=U.get(t)),e}function it(t){var e=D(t)+1,n=gt(e);return n&&k(t,S,n,e),n}var ot={};function ut(){if(!ct){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:p||"./this.program"};for(t in ot)void 0===ot[t]?delete e[t]:e[t]=ot[t];var n=[];for(t in e)n.push(t+"="+e[t]);ct=n}return ct}var ct,st=[null,[],[]];function lt(t,e){var n=st[t];0===e||10===e?((1===t?v:w)(R(n,0)),n.length=0):n.push(e)}var ft=0;function pt(t){return 0==t%4&&(0!=t%100||0==t%400)}var ht=[31,29,31,30,31,30,31,31,30,31,30,31],dt=[31,28,31,30,31,30,31,31,30,31,30,31];function yt(t,e,n,r){function a(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.lengtht?-1:0r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=u(new Date(t.getFullYear(),0,4)),n=u(n),0>=o(e,t)?0>=o(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var s=E[r+40>>2>>>0];for(var l in r={$b:E[r>>2>>>0],Zb:E[r+4>>2>>>0],Gb:E[r+8>>2>>>0],Kb:E[r+12>>2>>>0],Hb:E[r+16>>2>>>0],Cb:E[r+20>>2>>>0],Ab:E[r+24>>2>>>0],Bb:E[r+28>>2>>>0],bc:E[r+32>>2>>>0],Yb:E[r+36>>2>>>0],ac:s?j(s):""},n=j(n),s={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})n=n.replace(new RegExp(l,"g"),s[l]);var f="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),p="January February March April May June July August September October November December".split(" ");for(l in s={"%a":function(t){return f[t.Ab].substring(0,3)},"%A":function(t){return f[t.Ab]},"%b":function(t){return p[t.Hb].substring(0,3)},"%B":function(t){return p[t.Hb]},"%C":function(t){return i((t.Cb+1900)/100|0,2)},"%d":function(t){return i(t.Kb,2)},"%e":function(t){return a(t.Kb,2," ")},"%g":function(t){return c(t).toString().substring(2)},"%G":function(t){return c(t)},"%H":function(t){return i(t.Gb,2)},"%I":function(t){return 0==(t=t.Gb)?t=12:12t.Gb?"AM":"PM"},"%S":function(t){return i(t.$b,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Ab||7},"%U":function(t){return i(Math.floor((t.Bb+7-t.Ab)/7),2)},"%V":function(t){var e=Math.floor((t.Bb+7-(t.Ab+6)%7)/7);if(2>=(t.Ab+371-t.Bb-2)%7&&e++,e)53==e&&(4==(n=(t.Ab+371-t.Bb)%7)||3==n&&pt(t.Cb)||(e=1));else{e=52;var n=(t.Ab+7-t.Bb-1)%7;(4==n||5==n&&pt(t.Cb%400-1))&&e++}return i(e,2)},"%w":function(t){return t.Ab},"%W":function(t){return i(Math.floor((t.Bb+7-(t.Ab+6)%7)/7),2)},"%y":function(t){return(t.Cb+1900).toString().substring(2)},"%Y":function(t){return t.Cb+1900},"%z":function(t){var e=0<=(t=t.Yb);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.ac},"%%":function(){return"%"}},n=n.replace(/%%/g,"\\0\\0"),s)n.includes(l)&&(n=n.replace(new RegExp(l,"g"),s[l](r)));return l=function(t){var e=Array(D(t)+1);return k(t,e,0,e.length),e}(n=n.replace(/\\0\\0/g,"%")),l.length>e?0:(S.set(l,t>>>0),l.length-1)}var bt={a:function(t){return gt(t+24)+24},m:function(t){return(t=new et(t)).Pb()||(t.Ib(!0),K--),t.Jb(!1),Q.push(t),t.Nb(),t.Qb()},ia:function(t){throw w("Unexpected exception thrown, this is not properly supported - aborting"),C=!0,t},w:function(){Ot(0);var t=Q.pop();if(t.Xb()&&!t.Lb()){var e=t.Wb();e&&at(e)(t.Db),nt(t.Db)}tt=0},d:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return ft=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getUTCSeconds(),E[e+4>>2>>>0]=t.getUTCMinutes(),E[e+8>>2>>>0]=t.getUTCHours(),E[e+12>>2>>>0]=t.getUTCDate(),E[e+16>>2>>>0]=t.getUTCMonth(),E[e+20>>2>>>0]=t.getUTCFullYear()-1900,E[e+24>>2>>>0]=t.getUTCDay(),E[e+28>>2>>>0]=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(t,e){t=new Date(1e3*(M[t>>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getSeconds(),E[e+4>>2>>>0]=t.getMinutes(),E[e+8>>2>>>0]=t.getHours(),E[e+12>>2>>>0]=t.getDate(),E[e+16>>2>>>0]=t.getMonth(),E[e+20>>2>>>0]=t.getFullYear()-1900,E[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1);E[e+28>>2>>>0]=(t.getTime()-n.getTime())/864e5|0,E[e+36>>2>>>0]=-60*t.getTimezoneOffset();var r=new Date(t.getFullYear(),6,1).getTimezoneOffset();n=n.getTimezoneOffset(),E[e+32>>2>>>0]=0|(r!=n&&t.getTimezoneOffset()==Math.min(n,r))},Fa:function(t){var e=new Date(E[t+20>>2>>>0]+1900,E[t+16>>2>>>0],E[t+12>>2>>>0],E[t+8>>2>>>0],E[t+4>>2>>>0],E[t>>2>>>0],0),n=E[t+32>>2>>>0],r=e.getTimezoneOffset(),a=new Date(e.getFullYear(),0,1),i=new Date(e.getFullYear(),6,1).getTimezoneOffset(),o=a.getTimezoneOffset(),u=Math.min(o,i);return 0>n?E[t+32>>2>>>0]=Number(i!=o&&u==r):0>2>>>0]=e.getDay(),E[t+28>>2>>>0]=(e.getTime()-a.getTime())/864e5|0,E[t>>2>>>0]=e.getSeconds(),E[t+4>>2>>>0]=e.getMinutes(),E[t+8>>2>>>0]=e.getHours(),E[t+12>>2>>>0]=e.getDate(),E[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function t(e,n,r){t.Vb||(t.Vb=!0,function(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}var a=(new Date).getFullYear(),i=new Date(a,0,1),o=new Date(a,6,1);a=i.getTimezoneOffset();var u=o.getTimezoneOffset();E[t>>2>>>0]=60*Math.max(a,u),E[e>>2>>>0]=Number(a!=u),t=r(i),e=r(o),t=it(t),e=it(e),u>2>>>0]=t,M[n+4>>2>>>0]=e):(M[n>>2>>>0]=e,M[n+4>>2>>>0]=t)}(e,n,r))},B:function(){V("")},ma:function(){return 4294901760},I:b?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:()=>performance.now(),xa:function(t,e,n){T.copyWithin(t>>>0,e>>>0,e+n>>>0)},G:function(t){var e=T.length;if(4294901760<(t>>>=0))return!1;for(var n=1;4>=n;n*=2){var r=e*(1+.2/n);r=Math.min(r,t+100663296);var a=Math;r=Math.max(t,r),a=a.min.call(a,4294901760,r+(65536-r%65536)%65536);t:{try{O.grow(a-A.byteLength+65535>>>16),P();var i=1;break t}catch(t){}i=void 0}if(i)return!0}return!1},va:function(t,e){var n=0;return ut().forEach((function(r,a){var i=e+n;for(a=M[t+4*a>>2>>>0]=i,i=0;i>0>>>0]=r.charCodeAt(i);S[a>>0>>>0]=0,n+=r.length+1})),0},wa:function(t,e){var n=ut();M[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),M[e>>2>>>0]=r,0},ba:function(t){_||0>2>>>0],u=M[e+4>>2>>>0];e+=8;for(var c=0;c>>0]);a+=u}return M[r>>2>>>0]=a,0},c:function(){return ft},ja:function t(e,r){t.Mb||(t.Mb=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(b)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>V("randomDevice")}());for(var a=0;a>0>>>0]=t.Mb();return 0},ea:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},fa:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},J:function(t){var e=At();try{return at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},e:function(t,e){var n=At();try{return at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},N:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},O:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},j:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},o:function(t,e,n,r){var a=At();try{return at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},p:function(t,e,n,r,a){var i=At();try{return at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},M:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},r:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},v:function(t,e,n,r,a,i,o){var u=At();try{return at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},K:function(t,e,n,r,a,i,o,u){var c=At();try{return at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{return at(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},X:function(t,e,n,r,a,i,o,u){var c=At();try{return Ft(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},V:function(t,e,n,r,a,i,o){var u=At();try{return xt(t,e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},U:function(t,e,n,r,a){var i=At();try{return It(t,e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},Z:function(t,e,n,r){var a=At();try{return Pt(t,e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},W:function(t){var e=At();try{return Ct(t)}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},Y:function(t,e){var n=At();try{return Ut(t,e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},T:function(t,e,n){var r=At();try{return Rt(t,e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},f:function(t){var e=At();try{at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},q:function(t,e){var n=At();try{at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},h:function(t,e,n){var r=At();try{at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},da:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},l:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},t:function(t,e,n,r,a){var i=At();try{at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},u:function(t,e,n,r,a,i){var o=At();try{at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},x:function(t,e,n,r,a,i,o){var u=At();try{at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},z:function(t,e,n,r,a,i,o,u){var c=At();try{at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},ga:function(t,e,n,r,a,i,o,u,c){var s=At();try{at(t)(e,n,r,a,i,o,u,c)}catch(t){if(St(s),t!==t+0)throw t;Ot(1,0)}},A:function(t,e,n,r,a,i,o,u,c,s,l){var f=At();try{at(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(St(f),t!==t+0)throw t;Ot(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=At();try{at(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(St(b),t!==t+0)throw t;Ot(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=At();try{jt(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},_:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{Dt(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},$:function(t,e,n,r,a,i){var o=At();try{kt(t,e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},n:function(t){return t},F:function(t){ft=t},ha:yt,y:function(t,e,n,r){return yt(t,e,n,r)}};!function(){function t(t){e.asm=t.exports,O=e.asm.Ka,P(),U=e.asm.ib,I.unshift(e.asm.La),B--,e.monitorRunDependencies&&e.monitorRunDependencies(B),0==B&&(null!==G&&(clearInterval(G),G=null),N&&(t=N,N=null,t()))}function n(e){t(e.instance)}function r(t){return function(){if(!g&&(d||y)){if("function"==typeof fetch&&!Y.startsWith("file://"))return fetch(Y,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+Y+"\'";return t.arrayBuffer()})).catch((function(){return X()}));if(o)return new Promise((function(t,e){o(Y,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return X()}))}().then((function(t){return WebAssembly.instantiate(t,i)})).then((function(t){return t})).then(t,(function(t){w("failed to asynchronously prepare wasm: "+t),V(t)}))}var i={a:bt};if(B++,e.monitorRunDependencies&&e.monitorRunDependencies(B),e.instantiateWasm)try{return e.instantiateWasm(i,t)}catch(t){return w("Module.instantiateWasm callback failed with error: "+t),!1}(g||"function"!=typeof WebAssembly.instantiateStreaming||$()||Y.startsWith("file://")||b||"function"!=typeof fetch?r(n):fetch(Y,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,i).then(n,(function(t){return w("wasm streaming compile failed: "+t),w("falling back to ArrayBuffer instantiation"),r(n)}))}))).catch(a)}(),e.___wasm_call_ctors=function(){return(e.___wasm_call_ctors=e.asm.La).apply(null,arguments)},e._OrtInit=function(){return(e._OrtInit=e.asm.Ma).apply(null,arguments)},e._OrtCreateSessionOptions=function(){return(e._OrtCreateSessionOptions=e.asm.Na).apply(null,arguments)},e._OrtAppendExecutionProvider=function(){return(e._OrtAppendExecutionProvider=e.asm.Oa).apply(null,arguments)},e._OrtAddSessionConfigEntry=function(){return(e._OrtAddSessionConfigEntry=e.asm.Pa).apply(null,arguments)},e._OrtReleaseSessionOptions=function(){return(e._OrtReleaseSessionOptions=e.asm.Qa).apply(null,arguments)},e._OrtCreateSession=function(){return(e._OrtCreateSession=e.asm.Ra).apply(null,arguments)},e._OrtReleaseSession=function(){return(e._OrtReleaseSession=e.asm.Sa).apply(null,arguments)},e._OrtGetInputCount=function(){return(e._OrtGetInputCount=e.asm.Ta).apply(null,arguments)},e._OrtGetOutputCount=function(){return(e._OrtGetOutputCount=e.asm.Ua).apply(null,arguments)},e._OrtGetInputName=function(){return(e._OrtGetInputName=e.asm.Va).apply(null,arguments)},e._OrtGetOutputName=function(){return(e._OrtGetOutputName=e.asm.Wa).apply(null,arguments)},e._OrtFree=function(){return(e._OrtFree=e.asm.Xa).apply(null,arguments)},e._OrtCreateTensor=function(){return(e._OrtCreateTensor=e.asm.Ya).apply(null,arguments)},e._OrtGetTensorData=function(){return(e._OrtGetTensorData=e.asm.Za).apply(null,arguments)},e._OrtReleaseTensor=function(){return(e._OrtReleaseTensor=e.asm._a).apply(null,arguments)},e._OrtCreateRunOptions=function(){return(e._OrtCreateRunOptions=e.asm.$a).apply(null,arguments)},e._OrtAddRunConfigEntry=function(){return(e._OrtAddRunConfigEntry=e.asm.ab).apply(null,arguments)},e._OrtReleaseRunOptions=function(){return(e._OrtReleaseRunOptions=e.asm.bb).apply(null,arguments)},e._OrtRun=function(){return(e._OrtRun=e.asm.cb).apply(null,arguments)},e._OrtEndProfiling=function(){return(e._OrtEndProfiling=e.asm.db).apply(null,arguments)};var mt,gt=e._malloc=function(){return(gt=e._malloc=e.asm.eb).apply(null,arguments)},vt=e._free=function(){return(vt=e._free=e.asm.fb).apply(null,arguments)},wt=e._fflush=function(){return(wt=e._fflush=e.asm.gb).apply(null,arguments)},_t=e.___funcs_on_exit=function(){return(_t=e.___funcs_on_exit=e.asm.hb).apply(null,arguments)},Ot=e._setThrew=function(){return(Ot=e._setThrew=e.asm.jb).apply(null,arguments)},At=e.stackSave=function(){return(At=e.stackSave=e.asm.kb).apply(null,arguments)},St=e.stackRestore=function(){return(St=e.stackRestore=e.asm.lb).apply(null,arguments)},Tt=e.stackAlloc=function(){return(Tt=e.stackAlloc=e.asm.mb).apply(null,arguments)},Et=e.___cxa_can_catch=function(){return(Et=e.___cxa_can_catch=e.asm.nb).apply(null,arguments)},Mt=e.___cxa_is_pointer_type=function(){return(Mt=e.___cxa_is_pointer_type=e.asm.ob).apply(null,arguments)},Ct=e.dynCall_j=function(){return(Ct=e.dynCall_j=e.asm.pb).apply(null,arguments)},xt=e.dynCall_iiiiij=function(){return(xt=e.dynCall_iiiiij=e.asm.qb).apply(null,arguments)},Rt=e.dynCall_jii=function(){return(Rt=e.dynCall_jii=e.asm.rb).apply(null,arguments)},jt=e.dynCall_viiiiij=function(){return(jt=e.dynCall_viiiiij=e.asm.sb).apply(null,arguments)},kt=e.dynCall_vjji=function(){return(kt=e.dynCall_vjji=e.asm.tb).apply(null,arguments)},Dt=e.dynCall_viiijjjii=function(){return(Dt=e.dynCall_viiijjjii=e.asm.ub).apply(null,arguments)},Pt=e.dynCall_iij=function(){return(Pt=e.dynCall_iij=e.asm.vb).apply(null,arguments)},Ut=e.dynCall_ji=function(){return(Ut=e.dynCall_ji=e.asm.wb).apply(null,arguments)},Ft=e.dynCall_iiiiiij=function(){return(Ft=e.dynCall_iiiiiij=e.asm.xb).apply(null,arguments)},It=e.dynCall_iiij=function(){return(It=e.dynCall_iiij=e.asm.yb).apply(null,arguments)};function Wt(){function t(){if(!mt&&(mt=!0,e.calledRun=!0,!C)){if(Z(I),r(e),e.onRuntimeInitialized&&e.onRuntimeInitialized(),e.postRun)for("function"==typeof e.postRun&&(e.postRun=[e.postRun]);e.postRun.length;){var t=e.postRun.shift();H.unshift(t)}Z(H)}}if(!(0{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.iterateExtraOptions=void 0,e.iterateExtraOptions=(t,n,r,a)=>{if("object"==typeof t&&null!==t){if(r.has(t))throw new Error("Circular reference in options");r.add(t)}Object.entries(t).forEach((([t,i])=>{const o=n?n+t:t;if("object"==typeof i)(0,e.iterateExtraOptions)(i,o+".",r,a);else if("string"==typeof i||"number"==typeof i)a(o,i.toString());else{if("boolean"!=typeof i)throw new Error("Can\'t handle extra config type: "+typeof i);a(o,i?"1":"0")}}))}},586:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setRunOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setRunOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};try{if(void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);void 0===(null==t?void 0:t.terminate)&&(u.terminate=!1);let i=0;if(void 0!==(null==t?void 0:t.tag)&&(i=(0,a.allocWasmString)(t.tag,o)),n=e._OrtCreateRunOptions(u.logSeverityLevel,u.logVerbosityLevel,!!u.terminate,i),0===n)throw new Error("Can\'t create run options");return void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddRunConfigEntry(n,i,u))throw new Error(`Can\'t set a run config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseRunOptions(n),o.forEach(e._free),t}}},919:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setSessionOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setSessionOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(u);try{void 0===(null==t?void 0:t.graphOptimizationLevel)&&(u.graphOptimizationLevel="all");const c=(t=>{switch(t){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${t}`)}})(u.graphOptimizationLevel);void 0===(null==t?void 0:t.enableCpuMemArena)&&(u.enableCpuMemArena=!0),void 0===(null==t?void 0:t.enableMemPattern)&&(u.enableMemPattern=!0),void 0===(null==t?void 0:t.executionMode)&&(u.executionMode="sequential");const s=(t=>{switch(t){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${t}`)}})(u.executionMode);let l=0;if(void 0!==(null==t?void 0:t.logId)&&(l=(0,a.allocWasmString)(t.logId,o)),void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);if(void 0===(null==t?void 0:t.enableProfiling)&&(u.enableProfiling=!1),n=e._OrtCreateSessionOptions(c,!!u.enableCpuMemArena,!!u.enableMemPattern,s,!!u.enableProfiling,0,l,u.logSeverityLevel,u.logVerbosityLevel),0===n)throw new Error("Can\'t create session options");return(null==t?void 0:t.executionProviders)&&((t,e,n)=>{for(const r of e){let e="string"==typeof r?r:r.name;switch(e){case"xnnpack":e="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${e}`)}const o=(0,a.allocWasmString)(e,n);if(0!==(0,i.getInstance)()._OrtAppendExecutionProvider(t,o))throw new Error(`Can\'t append execution provider: ${e}`)}})(n,t.executionProviders,o),void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddSessionConfigEntry(n,i,u))throw new Error(`Can\'t set a session config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseSessionOptions(n),o.forEach(e._free),t}}},983:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.allocWasmString=void 0;const r=n(361);e.allocWasmString=(t,e)=>{const n=(0,r.getInstance)(),a=n.lengthBytesUTF8(t)+1,i=n._malloc(a);return n.stringToUTF8(t,i,a),e.push(i),i}},349:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.extractTransferableBuffers=e.endProfiling=e.run=e.releaseSession=e.createSession=e.createSessionFinalize=e.createSessionAllocate=e.initOrt=void 0;const r=n(586),a=n(919),i=n(983),o=n(361);e.initOrt=(t,e)=>{const n=(0,o.getInstance)()._OrtInit(t,e);if(0!==n)throw new Error(`Can\'t initialize onnxruntime. error code = ${n}`)};const u=new Map;e.createSessionAllocate=t=>{const e=(0,o.getInstance)(),n=e._malloc(t.byteLength);return e.HEAPU8.set(t,n),[n,t.byteLength]},e.createSessionFinalize=(t,e)=>{const n=(0,o.getInstance)();let r=0,i=0,c=[];try{if([i,c]=(0,a.setSessionOptions)(e),r=n._OrtCreateSession(t[0],t[1],i),0===r)throw new Error("Can\'t create a session")}finally{n._free(t[0]),n._OrtReleaseSessionOptions(i),c.forEach(n._free)}const s=n._OrtGetInputCount(r),l=n._OrtGetOutputCount(r),f=[],p=[],h=[],d=[];for(let t=0;t{const r=(0,e.createSessionAllocate)(t);return(0,e.createSessionFinalize)(r,n)},e.releaseSession=t=>{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=n[1],i=n[2];a.forEach(e._OrtFree),i.forEach(e._OrtFree),e._OrtReleaseSession(r),u.delete(t)};const c=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},s=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};e.run=(t,e,n,a,f)=>{const p=(0,o.getInstance)(),h=u.get(t);if(!h)throw new Error("invalid session id");const d=h[0],y=h[1],b=h[2],m=e.length,g=a.length;let v=0,w=[];const _=[],O=[];try{[v,w]=(0,r.setRunOptions)(f);for(let t=0;tp.HEAP32[t++]=e));const n=p._OrtCreateTensor(c(e),o,u,l,r.length);if(0===n)throw new Error("Can\'t create a tensor");_.push(n)}finally{p.stackRestore(s)}}const t=p.stackSave(),o=p.stackAlloc(4*m),u=p.stackAlloc(4*m),h=p.stackAlloc(4*g),A=p.stackAlloc(4*g);try{let n=o/4,r=u/4,i=h/4,c=A/4;for(let t=0;tt*e));if(a=s(o),"string"===a){const t=[];let e=i/4;for(let n=0;n{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=e._OrtEndProfiling(r);if(0===a)throw new Error("Can\'t get an profile file name");e._OrtFree(a)},e.extractTransferableBuffers=t=>{const e=[];for(const n of t){const t=n[2];!Array.isArray(t)&&t.buffer&&e.push(t.buffer)}return e}},361:function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n);var a=Object.getOwnPropertyDescriptor(e,n);a&&!("get"in a?!e.__esModule:a.writable||a.configurable)||(a={enumerable:!0,get:function(){return e[n]}}),Object.defineProperty(t,r,a)}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),a=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.prototype.hasOwnProperty.call(t,n)&&r(e,t,n);return a(e,t),e},o=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.dispose=e.getInstance=e.initializeWebAssembly=void 0;const u=i(n(449)),c=o(n(932)),s=n(474);let l,f=!1,p=!1,h=!1;const d=(t,e)=>e?t?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":t?"ort-wasm-simd.wasm":"ort-wasm.wasm";e.initializeWebAssembly=async t=>{if(f)return Promise.resolve();if(p)throw new Error("multiple calls to \'initializeWebAssembly()\' detected.");if(h)throw new Error("previous call to \'initializeWebAssembly()\' failed.");p=!0;const e=t.initTimeout,r=t.numThreads,a=t.simd,i=r>1&&(()=>{try{return"undefined"!=typeof SharedArrayBuffer&&("undefined"!=typeof MessageChannel&&(new MessageChannel).port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch(t){return!1}})(),o=a&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch(t){return!1}})(),y="string"==typeof t.wasmPaths?t.wasmPaths:void 0,b=d(!1,i),m=d(o,i),g="object"==typeof t.wasmPaths?t.wasmPaths[m]:void 0;let v=!1;const w=[];if(e>0&&w.push(new Promise((t=>{setTimeout((()=>{v=!0,t()}),e)}))),w.push(new Promise(((t,e)=>{const r=i?s:c.default,a={locateFile:(t,e)=>i&&t.endsWith(".worker.js")&&"undefined"!=typeof Blob?URL.createObjectURL(new Blob([n(154)],{type:"text/javascript"})):t===b?null!=g?g:(null!=y?y:e)+m:e+t};if(i)if("undefined"==typeof Blob)a.mainScriptUrlOrBlob=u.join("/","ort-wasm-threaded.js");else{const t=`var ortWasmThreaded=(function(){var _scriptDir;return ${r.toString()}})();`;a.mainScriptUrlOrBlob=new Blob([t],{type:"text/javascript"})}r(a).then((e=>{p=!1,f=!0,l=e,t()}),(t=>{p=!1,h=!0,e(t)}))}))),await Promise.race(w),v)throw new Error(`WebAssembly backend initializing failed due to timeout: ${e}ms`)},e.getInstance=()=>{if(f&&l)return l;throw new Error("WebAssembly is not initialized yet.")},e.dispose=()=>{var t;!f||p||h||(p=!0,null===(t=l.PThread)||void 0===t||t.terminateAllThreads(),l=void 0,p=!1,f=!1,h=!0)}},154:t=>{"use strict";t.exports=\'"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}};\\n\'},384:()=>{},993:()=>{},908:()=>{},953:()=>{},925:()=>{},449:()=>{}},e={};function n(r){var a=e[r];if(void 0!==a)return a.exports;var i=e[r]={exports:{}};return t[r].call(i.exports,i,i.exports,n),i.exports}n.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(t){if("object"==typeof window)return window}}(),(()=>{"use strict";const t=n(349),e=n(361);self.onmessage=n=>{switch(n.data.type){case"init-wasm":(0,e.initializeWebAssembly)(n.data.in).then((()=>postMessage({type:"init-wasm"})),(t=>postMessage({type:"init-wasm",err:t})));break;case"init-ort":try{const{numThreads:e,loggingLevel:r}=n.data.in;(0,t.initOrt)(e,r),postMessage({type:"init-ort"})}catch(t){postMessage({type:"init-ort",err:t})}break;case"create_allocate":try{const{model:e}=n.data.in,r=(0,t.createSessionAllocate)(e);postMessage({type:"create_allocate",out:r})}catch(t){postMessage({type:"create_allocate",err:t})}break;case"create_finalize":try{const{modeldata:e,options:r}=n.data.in,a=(0,t.createSessionFinalize)(e,r);postMessage({type:"create_finalize",out:a})}catch(t){postMessage({type:"create_finalize",err:t})}break;case"create":try{const{model:e,options:r}=n.data.in,a=(0,t.createSession)(e,r);postMessage({type:"create",out:a})}catch(t){postMessage({type:"create",err:t})}break;case"release":try{const e=n.data.in;(0,t.releaseSession)(e),postMessage({type:"release"})}catch(t){postMessage({type:"release",err:t})}break;case"run":try{const{sessionId:e,inputIndices:r,inputs:a,outputIndices:i,options:o}=n.data.in,u=(0,t.run)(e,r,a,i,o);postMessage({type:"run",out:u},(0,t.extractTransferableBuffers)(u))}catch(t){postMessage({type:"run",err:t})}break;case"end-profiling":try{const e=n.data.in;(0,t.endProfiling)(e),postMessage({type:"end-profiling"})}catch(t){postMessage({type:"end-profiling",err:t})}}}})()})();\n',"Worker",void 0,void 0)}},477:b=>{b.exports=function(n,a,u,c){var p=self||window;try{try{var s;try{s=new p.Blob([n])}catch{(s=new(p.BlobBuilder||p.WebKitBlobBuilder||p.MozBlobBuilder||p.MSBlobBuilder)).append(n),s=s.getBlob()}var h=p.URL||p.webkitURL,f=h.createObjectURL(s),l=new p[a](f,u);return h.revokeObjectURL(f),l}catch{return new p[a]("data:application/javascript,".concat(encodeURIComponent(n)),u)}}catch{if(!c)throw Error("Inline worker is not supported");return new p[a](c,u)}}},4154:b=>{b.exports=`"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}}; -`},1670:b=>{b.exports=__WEBPACK_EXTERNAL_MODULE__1670__},7067:()=>{},1296:()=>{},1384:()=>{},3993:()=>{},908:()=>{},6953:()=>{},9925:()=>{},2806:()=>{},6449:()=>{},2850:()=>{},5381:()=>{},5686:(b,n,a)=>{a.r(n),a.d(n,{flatbuffers:()=>u});var u={};u.Offset,u.Table,u.SIZEOF_SHORT=2,u.SIZEOF_INT=4,u.FILE_IDENTIFIER_LENGTH=4,u.SIZE_PREFIX_LENGTH=4,u.Encoding={UTF8_BYTES:1,UTF16_STRING:2},u.int32=new Int32Array(2),u.float32=new Float32Array(u.int32.buffer),u.float64=new Float64Array(u.int32.buffer),u.isLittleEndian=new Uint16Array(new Uint8Array([1,0]).buffer)[0]===1,u.Long=function(c,p){this.low=0|c,this.high=0|p},u.Long.create=function(c,p){return c==0&&p==0?u.Long.ZERO:new u.Long(c,p)},u.Long.prototype.toFloat64=function(){return(this.low>>>0)+4294967296*this.high},u.Long.prototype.equals=function(c){return this.low==c.low&&this.high==c.high},u.Long.ZERO=new u.Long(0,0),u.Builder=function(c){if(c)p=c;else var p=1024;this.bb=u.ByteBuffer.allocate(p),this.space=p,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},u.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},u.Builder.prototype.forceDefaults=function(c){this.force_defaults=c},u.Builder.prototype.dataBuffer=function(){return this.bb},u.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},u.Builder.prototype.prep=function(c,p){c>this.minalign&&(this.minalign=c);for(var s=1+~(this.bb.capacity()-this.space+p)&c-1;this.space=0&&this.vtable[p]==0;p--);for(var s=p+1;p>=0;p--)this.addInt16(this.vtable[p]!=0?c-this.vtable[p]:0);this.addInt16(c-this.object_start);var h=(s+2)*u.SIZEOF_SHORT;this.addInt16(h);var f=0,l=this.space;e:for(p=0;p=0;l--)this.writeInt8(f.charCodeAt(l))}this.prep(this.minalign,u.SIZEOF_INT+h),this.addOffset(c),h&&this.addInt32(this.bb.capacity()-this.space),this.bb.setPosition(this.space)},u.Builder.prototype.finishSizePrefixed=function(c,p){this.finish(c,p,!0)},u.Builder.prototype.requiredField=function(c,p){var s=this.bb.capacity()-c,h=s-this.bb.readInt32(s);if(this.bb.readInt16(h+p)==0)throw new Error("FlatBuffers: field "+p+" must be set")},u.Builder.prototype.startVector=function(c,p,s){this.notNested(),this.vector_num_elems=p,this.prep(u.SIZEOF_INT,c*p),this.prep(s,c*p)},u.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},u.Builder.prototype.createString=function(c){if(c instanceof Uint8Array)var p=c;else{p=[];for(var s=0;s=56320?f:(f<<10)+c.charCodeAt(s++)+-56613888)<128?p.push(h):(h<2048?p.push(h>>6&31|192):(h<65536?p.push(h>>12&15|224):p.push(h>>18&7|240,h>>12&63|128),p.push(h>>6&63|128)),p.push(63&h|128))}}this.addInt8(0),this.startVector(1,p.length,1),this.bb.setPosition(this.space-=p.length),s=0;for(var l=this.space,o=this.bb.bytes();s>24},u.ByteBuffer.prototype.readUint8=function(c){return this.bytes_[c]},u.ByteBuffer.prototype.readInt16=function(c){return this.readUint16(c)<<16>>16},u.ByteBuffer.prototype.readUint16=function(c){return this.bytes_[c]|this.bytes_[c+1]<<8},u.ByteBuffer.prototype.readInt32=function(c){return this.bytes_[c]|this.bytes_[c+1]<<8|this.bytes_[c+2]<<16|this.bytes_[c+3]<<24},u.ByteBuffer.prototype.readUint32=function(c){return this.readInt32(c)>>>0},u.ByteBuffer.prototype.readInt64=function(c){return new u.Long(this.readInt32(c),this.readInt32(c+4))},u.ByteBuffer.prototype.readUint64=function(c){return new u.Long(this.readUint32(c),this.readUint32(c+4))},u.ByteBuffer.prototype.readFloat32=function(c){return u.int32[0]=this.readInt32(c),u.float32[0]},u.ByteBuffer.prototype.readFloat64=function(c){return u.int32[u.isLittleEndian?0:1]=this.readInt32(c),u.int32[u.isLittleEndian?1:0]=this.readInt32(c+4),u.float64[0]},u.ByteBuffer.prototype.writeInt8=function(c,p){this.bytes_[c]=p},u.ByteBuffer.prototype.writeUint8=function(c,p){this.bytes_[c]=p},u.ByteBuffer.prototype.writeInt16=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8},u.ByteBuffer.prototype.writeUint16=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8},u.ByteBuffer.prototype.writeInt32=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8,this.bytes_[c+2]=p>>16,this.bytes_[c+3]=p>>24},u.ByteBuffer.prototype.writeUint32=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8,this.bytes_[c+2]=p>>16,this.bytes_[c+3]=p>>24},u.ByteBuffer.prototype.writeInt64=function(c,p){this.writeInt32(c,p.low),this.writeInt32(c+4,p.high)},u.ByteBuffer.prototype.writeUint64=function(c,p){this.writeUint32(c,p.low),this.writeUint32(c+4,p.high)},u.ByteBuffer.prototype.writeFloat32=function(c,p){u.float32[0]=p,this.writeInt32(c,u.int32[0])},u.ByteBuffer.prototype.writeFloat64=function(c,p){u.float64[0]=p,this.writeInt32(c,u.int32[u.isLittleEndian?0:1]),this.writeInt32(c+4,u.int32[u.isLittleEndian?1:0])},u.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length>10),56320+(1023&l)))}return h},u.ByteBuffer.prototype.__indirect=function(c){return c+this.readInt32(c)},u.ByteBuffer.prototype.__vector=function(c){return c+this.readInt32(c)+u.SIZEOF_INT},u.ByteBuffer.prototype.__vector_len=function(c){return this.readInt32(c+this.readInt32(c))},u.ByteBuffer.prototype.__has_identifier=function(c){if(c.length!=u.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+u.FILE_IDENTIFIER_LENGTH);for(var p=0;p{var n=b&&b.__esModule?()=>b.default:()=>b;return __webpack_require__.d(n,{a:n}),n},__webpack_require__.d=(b,n)=>{for(var a in n)__webpack_require__.o(n,a)&&!__webpack_require__.o(b,a)&&Object.defineProperty(b,a,{enumerable:!0,get:n[a]})},__webpack_require__.g=function(){if(typeof globalThis=="object")return globalThis;try{return this||new Function("return this")()}catch{if(typeof window=="object")return window}}(),__webpack_require__.o=(b,n)=>Object.prototype.hasOwnProperty.call(b,n),__webpack_require__.r=b=>{typeof Symbol<"u"&&Symbol.toStringTag&&Object.defineProperty(b,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(b,"__esModule",{value:!0})};var __webpack_exports__=__webpack_require__(6018);return __webpack_exports__})())})(ortWeb_min$1);var ortWeb_minExports=ortWeb_min$1.exports,ortWeb_min=getDefaultExportFromCjs(ortWeb_minExports),ONNX_WEB=_mergeNamespaces({__proto__:null,default:ortWeb_min},[ortWeb_minExports]);let ONNX;const executionProviders=["wasm"];typeof process<"u"&&((nt=process==null?void 0:process.release)==null?void 0:nt.name)==="node"?(ONNX=sharp??ONNX_NODE,executionProviders.unshift("cpu")):(ONNX=ortWeb_min??ONNX_WEB,typeof navigator<"u"&&/iP(hone|od|ad).+16_4.+AppleWebKit/.test(navigator.userAgent)&&(ONNX.env.wasm.simd=!1));const{env:onnx_env}=ONNX,VERSION="2.6.2",WEB_CACHE_AVAILABLE=typeof self<"u"&&"caches"in self,FS_AVAILABLE=!isEmpty(sharp),PATH_AVAILABLE=!isEmpty(sharp),RUNNING_LOCALLY=FS_AVAILABLE&&PATH_AVAILABLE,__dirname=RUNNING_LOCALLY?sharp.dirname(sharp.dirname(sharp.fileURLToPath(self.location.href))):"./",DEFAULT_CACHE_DIR=RUNNING_LOCALLY?sharp.join(__dirname,"/.cache/"):null,DEFAULT_LOCAL_MODEL_PATH="/models/",localModelPath=RUNNING_LOCALLY?sharp.join(__dirname,DEFAULT_LOCAL_MODEL_PATH):DEFAULT_LOCAL_MODEL_PATH;onnx_env.wasm.wasmPaths=RUNNING_LOCALLY?sharp.join(__dirname,"/dist/"):`https://cdn.jsdelivr.net/npm/@xenova/transformers@${VERSION}/dist/`;const env={backends:{onnx:onnx_env,tfjs:{}},__dirname,version:VERSION,allowRemoteModels:!0,remoteHost:"https://huggingface.co/",remotePathTemplate:"{model}/resolve/{revision}/",allowLocalModels:!0,localModelPath,useFS:FS_AVAILABLE,useBrowserCache:WEB_CACHE_AVAILABLE,useFSCache:FS_AVAILABLE,cacheDir:DEFAULT_CACHE_DIR,useCustomCache:!1,customCache:null};function isEmpty(b){return Object.keys(b).length===0}globalThis.ReadableStream||(globalThis.ReadableStream=sharp.ReadableStream);class FileResponse{constructor(n){je(this,"_CONTENT_TYPE_MAP",{txt:"text/plain",html:"text/html",css:"text/css",js:"text/javascript",json:"application/json",png:"image/png",jpg:"image/jpeg",jpeg:"image/jpeg",gif:"image/gif"});if(this.filePath=n,this.headers=new Headers,this.exists=sharp.existsSync(n),this.exists){this.status=200,this.statusText="OK";let a=sharp.statSync(n);this.headers.set("content-length",a.size.toString()),this.updateContentType();let u=this;this.body=new ReadableStream({start(c){u.arrayBuffer().then(p=>{c.enqueue(new Uint8Array(p)),c.close()})}})}else this.status=404,this.statusText="Not Found",this.body=null}updateContentType(){const n=this.filePath.toString().split(".").pop().toLowerCase();this.headers.set("content-type",this._CONTENT_TYPE_MAP[n]??"application/octet-stream")}clone(){let n=new FileResponse(this.filePath);return n.exists=this.exists,n.status=this.status,n.statusText=this.statusText,n.headers=new Headers(this.headers),n}async arrayBuffer(){return(await sharp.promises.readFile(this.filePath)).buffer}async blob(){const n=await sharp.promises.readFile(this.filePath);return new Blob([n],{type:this.headers.get("content-type")})}async text(){return await sharp.promises.readFile(this.filePath,"utf8")}async json(){return JSON.parse(await this.text())}}function isValidHttpUrl(b,n=null){let a;try{a=new URL(b)}catch{return!1}return n&&!n.includes(a.hostname)?!1:a.protocol==="http:"||a.protocol==="https:"}async function getFile(b){var n,a,u;if(env.useFS&&!isValidHttpUrl(b))return new FileResponse(b);if(typeof process<"u"&&((n=process==null?void 0:process.release)==null?void 0:n.name)==="node"){const c=!!((a=process.env)!=null&&a.TESTING_REMOTELY),p=env.version,s=new Headers;if(s.set("User-Agent",`transformers.js/${p}; is_ci/${c};`),isValidHttpUrl(b,["huggingface.co","hf.co"])){const f=(u=process.env)==null?void 0:u.HF_ACCESS_TOKEN;f&&s.set("Authorization",`Bearer ${f}`)}return fetch(b,{headers:s})}else return fetch(b)}const ERROR_MAPPING={400:"Bad request error occurred while trying to load file",401:"Unauthorized access to file",403:"Forbidden access to file",404:"Could not locate file",408:"Request timeout error occurred while trying to load file",500:"Internal server error error occurred while trying to load file",502:"Bad gateway error occurred while trying to load file",503:"Service unavailable error occurred while trying to load file",504:"Gateway timeout error occurred while trying to load file"};function handleError(b,n,a){if(!a)return null;const u=ERROR_MAPPING[b]??`Error (${b}) occurred while trying to load file`;throw Error(`${u}: "${n}".`)}class FileCache{constructor(n){this.path=n}async match(n){let a=sharp.join(this.path,n),u=new FileResponse(a);if(u.exists)return u}async put(n,a){const u=Buffer.from(await a.arrayBuffer());let c=sharp.join(this.path,n);try{await sharp.promises.mkdir(sharp.dirname(c),{recursive:!0}),await sharp.promises.writeFile(c,u)}catch(p){console.warn("An error occurred while writing the file to cache:",p)}}}async function tryCache(b,...n){for(let a of n)try{let u=await b.match(a);if(u)return u}catch{continue}}async function getModelFile(b,n,a=!0,u={}){if(!env.allowLocalModels&&u.local_files_only)throw Error("Invalid configuration detected: local models are disabled (`env.allowLocalModels=false`) but you have requested to only use local models (`local_files_only=true`).");dispatchCallback(u.progress_callback,{status:"initiate",name:b,file:n});let c;if(!c&&env.useBrowserCache){if(typeof caches>"u")throw Error("Browser cache is not available in this environment.");try{c=await caches.open("transformers-cache")}catch(d){console.warn("An error occurred while opening the browser cache:",d)}}if(!c&&env.useFSCache&&(c=new FileCache(u.cache_dir??env.cacheDir)),!c&&env.useCustomCache)throw Error("`env.useCustomCache=true`, but `env.customCache` is not defined.");const p=u.revision??"main";let s=pathJoin(b,n),h=pathJoin(env.localModelPath,s),f=pathJoin(env.remoteHost,env.remotePathTemplate.replaceAll("{model}",b).replaceAll("{revision}",p),n),l=p==="main"?s:pathJoin(b,p,n),o,t=c instanceof FileCache?l:f,e=!1,r;if(c&&(r=await tryCache(c,h,t)),r===void 0){if(env.allowLocalModels)if(isValidHttpUrl(s)){if(u.local_files_only)throw new Error(`\`local_files_only=true\`, but attempted to load a remote file from: ${s}.`)}else try{r=await getFile(h),o=h}catch(g){console.warn(`Unable to load from local path "${h}": "${g}"`)}if(r===void 0||r.status===404){if(u.local_files_only||!env.allowRemoteModels){if(a)throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${h}".`);return null}if(r=await getFile(f),r.status!==200)return handleError(r.status,f,a);o=t}e=c&&typeof Response<"u"&&r instanceof Response&&r.status===200}dispatchCallback(u.progress_callback,{status:"download",name:b,file:n});const i=await readResponse(r,d=>{dispatchCallback(u.progress_callback,{status:"progress",...d,name:b,file:n})});return e&&o&&await c.match(o)===void 0&&await c.put(o,new Response(i,{headers:r.headers})).catch(d=>{console.warn(`Unable to add response to browser cache: ${d}.`)}),dispatchCallback(u.progress_callback,{status:"done",name:b,file:n}),i}async function getModelJSON(b,n,a=!0,u={}){let c=await getModelFile(b,n,a,u);if(c===null)return{};let s=new TextDecoder("utf-8").decode(c);return JSON.parse(s)}async function readResponse(b,n){const a=b.headers.get("Content-Length");a===null&&console.warn("Unable to determine content-length from response headers. Will expand buffer when needed.");let u=parseInt(a??"0"),c=new Uint8Array(u),p=0;const s=b.body.getReader();async function h(){const{done:f,value:l}=await s.read();if(f)return;let o=p+l.length;if(o>u){u=o;let e=new Uint8Array(u);e.set(c),c=e}c.set(l,p),p=o;const t=p/u*100;return n({progress:t,loaded:p,total:u}),h()}return await h(),c}function pathJoin(...b){return b=b.map((n,a)=>(a&&(n=n.replace(new RegExp("^/"),"")),a!==b.length-1&&(n=n.replace(new RegExp("/$"),"")),n)),b.join("/")}function transpose_data(b,n,a){const u=new Array(a.length),c=new Array(a.length);for(let h=a.length-1,f=1;h>=0;--h)c[h]=f,u[h]=n[a[h]],f*=u[h];const p=a.map((h,f)=>c[a.indexOf(f)]),s=new b.constructor(b.length);for(let h=0;h=0;--l)f+=o%n[l]*p[l],o=Math.floor(o/n[l]);s[f]=b[h]}return[s,u]}function softmax(b){const n=max(b)[0],a=b.map(p=>Math.exp(p-n)),u=a.reduce((p,s)=>p+s,0);return a.map(p=>p/u)}function log_softmax(b){return softmax(b).map(u=>Math.log(u))}function getTopItems(b,n=0){return b=Array.from(b).map((a,u)=>[u,a]).sort((a,u)=>u[1]-a[1]),n>0&&(b=b.slice(0,n)),b}function min(b){if(b.length===0)throw Error("Array must not be empty");let n=b[0],a=0;for(let u=1;un&&(n=b[u],a=u);return[n,a]}function medianFilter(b,n){if(n%2===0||n<=0)throw new Error("Window size must be a positive odd number");const a=new b.constructor(b.length),u=new b.constructor(n),c=Math.floor(n/2);for(let p=0;p=b.length&&(f=2*(b.length-1)-f),u[s++]=b[f]}u.sort(),a[p]=u[c]}return a}function round(b,n){const a=Math.pow(10,n);return Math.round(b*a)/a}const ONNXTensor$1=ONNX.Tensor;class Tensor extends ONNXTensor$1{constructor(...n){return n[0]instanceof ONNX.Tensor?super(n[0].type,n[0].data,n[0].dims):super(...n),new Proxy(this,{get:(a,u)=>{if(typeof u=="string"){let c=Number(u);if(Number.isInteger(c))return a._getitem(c)}return a[u]},set:(a,u,c)=>a[u]=c})}*[Symbol.iterator](){const[n,...a]=this.dims;if(a.length>0){const u=a.reduce((c,p)=>c*p);for(let c=0;c0){const c=u.reduce((p,s)=>p*s);return this._subarray(n,c,u)}else return new Tensor(this.type,[this.data[n]],u)}indexOf(n){for(let a=0;al[1])throw new Error(`Invalid slice: ${l}`);let o=[Math.max(l[0],0),Math.min(l[1],this.dims[f])];u.push(o),a.push(o[1]-o[0])}else throw new Error(`Invalid slice: ${l}`)}let c=u.map(([f,l])=>l-f),p=c.reduce((f,l)=>f*l),s=new this.data.constructor(p);const h=this.stride();for(let f=0;f=0;--o){const e=c[o];l+=(t%e+u[o][0])*h[o],t=Math.floor(t/e)}s[f]=this.data[l]}return new Tensor(this.type,s,a)}transpose(...n){return transpose(this,n)}sum(n=null,a=!1){return this.norm(1,n,a)}norm(n="fro",a=null,u=!1){if(n==="fro")n=2;else if(typeof n=="string")throw Error(`Unsupported norm: ${n}`);if(a===null){let s=this.data.reduce((h,f)=>h+f**n,0)**(1/n);return new Tensor(this.type,[s],[])}a=safeIndex(a,this.dims.length);const c=this.dims.slice();c[a]=1;const p=new this.data.constructor(this.data.length/this.dims[a]);for(let s=0;s=0;--f){const t=this.dims[f];if(f!==a){const e=l%t;h+=e*o,o*=c[f]}l=Math.floor(l/t)}p[h]+=this.data[s]**n}if(n!==1)for(let s=0;s=0;--s){const l=this.dims[s];if(s!==a){const o=h%l;p+=o*f,f*=this.dims[s]}h=Math.floor(h/l)}this.data[c]/=u.data[p]}return this}normalize(n=2,a=1){return this.clone().normalize_(n,a)}stride(){return dimsToStride(this.dims)}squeeze(n=null){return new Tensor(this.type,this.data,calc_squeeze_dims(this.dims,n))}squeeze_(n=null){return this.dims=calc_squeeze_dims(this.dims,n),this}unsqueeze(n=null){return new Tensor(this.type,this.data,calc_unsqueeze_dims(this.dims,n))}unsqueeze_(n=null){return this.dims=calc_unsqueeze_dims(this.dims,n),this}flatten_(n=0,a=-1){a=(a+this.dims.length)%this.dims.length;let u=this.dims.slice(0,n),c=this.dims.slice(n,a+1),p=this.dims.slice(a+1);return this.dims=[...u,c.reduce((s,h)=>s*h,1),...p],this}flatten(n=0,a=-1){return this.clone().flatten_(n,a)}view(...n){let a=-1;for(let u=0;us!==a?c*p:c,1);n[a]=this.data.length/u}return new Tensor(this.type,this.data,n)}neg_(){for(let n=0;np*s);if(a!==u)throw Error(`cannot reshape array of size ${a} into shape (${n})`);let c=b;for(let p=n.length-1;p>=0;p--)c=c.reduce((s,h)=>{let f=s[s.length-1];return f.lengtha!==1):typeof n=="number"?b[n]===1&&b.splice(n,1):Array.isArray(n)&&(b=b.filter((a,u)=>a!==1||!n.includes(u))),b}function calc_unsqueeze_dims(b,n){return n=safeIndex(n,b.length+1),b=b.slice(),b.splice(n,0,1),b}function safeIndex(b,n,a=null){if(b<-n||b>=n)throw new Error(`IndexError: index ${b} is out of bounds for dimension${a===null?"":" "+a} with size ${n}`);return b<0&&(b=(b%n+n)%n),b}function cat(b,n=0){n=safeIndex(n,b[0].dims.length);const a=b[0].dims.slice();a[n]=b.reduce((s,h)=>s+h.dims[n],0);const u=a.reduce((s,h)=>s*h,1),c=new b[0].data.constructor(u),p=b[0].type;if(n===0){let s=0;for(let h of b)c.set(h.data,s),s+=h.data.length}else{let s=0;for(let h=0;h=0;--t){const i=f.dims[t];let d=e%i;t===n&&(d+=s),o+=d*r,r*=a[t],e=Math.floor(e/i)}c[o]=f.data[l]}s+=f.dims[n]}}return new Tensor(p,c,a)}function stack(b,n=0){return cat(b.map(a=>a.unsqueeze(n)),n)}function std_mean(b,n=null,a=1,u=!1){if(n===null){const l=b.data.reduce((r,i)=>r+i,0)/b.data.length,o=Math.sqrt(b.data.reduce((r,i)=>r+(i-l)**2,0)/(b.data.length-a)),t=new Tensor(b.type,[l],[]);return[new Tensor(b.type,[o],[]),t]}n=safeIndex(n,b.dims.length);const c=mean(b,n,u),p=b.dims.slice();p[n]=1;const s=new b.data.constructor(b.data.length/b.dims[n]);for(let f=0;f=0;--o){const r=b.dims[o];if(o!==n){const i=t%r;l+=i*e,e*=p[o]}t=Math.floor(t/r)}s[l]+=(b.data[f]-c.data[l])**2}for(let f=0;fs+h,0);return new Tensor(b.type,[p/b.data.length],[])}n=safeIndex(n,b.dims.length);const u=b.dims.slice();u[n]=1;const c=new b.data.constructor(b.data.length/b.dims[n]);for(let p=0;p=0;--h){const o=b.dims[h];if(h!==n){const t=f%o;s+=t*l,l*=u[h]}f=Math.floor(f/o)}c[s]+=b.data[p]}if(b.dims[n]!==1)for(let p=0;p0||h>0;)switch(f.push(s-1),l.push(h-1),p[s][h].item()){case 0:--s,--h;break;case 1:--s;break;case 2:--h;break;default:throw new Error(`Internal error in dynamic time warping. Unexpected trace[${s}, ${h}]. Please file a bug report.`)}return f.reverse(),l.reverse(),[f,l]}function dimsToStride(b){const n=new Array(b.length);for(let a=b.length-1,u=1;a>=0;--a)n[a]=u,u*=b[a];return n}function ones(b){const n=b.reduce((a,u)=>a*u,1);return new Tensor("int64",new BigInt64Array(n).fill(1n),b)}function ones_like(b){return ones(b.dims)}class PriorityQueue{constructor(n=(a,u)=>a>u){this._heap=[],this._comparator=n}get size(){return this._heap.length}isEmpty(){return this.size===0}peek(){return this._heap[0]}push(...n){return this.extend(n)}extend(n){for(const a of n)this._heap.push(a),this._siftUp();return this.size}pop(){const n=this.peek(),a=this.size-1;return a>0&&this._swap(0,a),this._heap.pop(),this._siftDown(),n}replace(n){const a=this.peek();return this._heap[0]=n,this._siftDown(),a}_parent(n){return(n+1>>>1)-1}_left(n){return(n<<1)+1}_right(n){return n+1<<1}_greater(n,a){return this._comparator(this._heap[n],this._heap[a])}_swap(n,a){const u=this._heap[n];this._heap[n]=this._heap[a],this._heap[a]=u}_siftUp(){let n=this.size-1;for(;n>0&&this._greater(n,this._parent(n));)this._swap(n,this._parent(n)),n=this._parent(n)}_siftDown(){let n=0;for(;this._left(n)[]),this.endNodes=Array.from({length:this.len+1},()=>[]);const c=new TokenLatticeNode(this.bosTokenId,0,0,0,0),p=new TokenLatticeNode(this.eosTokenId,1,this.len,0,0);this.nodes.push(c.clone()),this.nodes.push(p.clone()),this.beginNodes[this.len].push(p),this.endNodes[0].push(c)}insert(n,a,u,c){const p=this.nodes.length,s=new TokenLatticeNode(c,p,n,a,u);this.beginNodes[n].push(s),this.endNodes[n+a].push(s),this.nodes.push(s)}viterbi(){const n=this.len;let a=0;for(;a<=n;){if(this.beginNodes[a].length==0)return[];for(let h of this.beginNodes[a]){h.prev=null;let f=0,l=null;for(let o of this.endNodes[a]){const t=o.backtraceScore+h.score;(l===null||t>f)&&(l=o.clone(),f=t)}if(l!==null)h.prev=l,h.backtraceScore=f;else return[]}++a}const u=[],p=this.beginNodes[n][0].prev;if(p===null)return[];let s=p.clone();for(;s.prev!==null;)u.push(s.clone()),s=s.clone().prev.clone();return u.reverse(),u}piece(n){return this.sentence.slice(n.pos,n.pos+n.length)}tokens(){return this.viterbi().map(a=>this.piece(a))}tokenIds(){return this.viterbi().map(a=>a.tokenId)}}class TokenLatticeNode{constructor(n,a,u,c,p){this.tokenId=n,this.nodeId=a,this.pos=u,this.length=c,this.score=p,this.prev=null,this.backtraceScore=0}clone(){const n=new TokenLatticeNode(this.tokenId,this.nodeId,this.pos,this.length,this.score);return n.prev=this.prev,n.backtraceScore=this.backtraceScore,n}}async function loadTokenizer(b,n){return await Promise.all([getModelJSON(b,"tokenizer.json",!0,n),getModelJSON(b,"tokenizer_config.json",!0,n)])}function createPattern(b,n=!0){return b.Regex!==void 0?new RegExp(n?b.Regex:`(${b.Regex})`,"gu"):b.String!==void 0?b.String:(console.warn("Unknown pattern type:",b),null)}function objectToMap(b){return new Map(Object.entries(b))}function clean_up_tokenization(b){return b.replace(/ \./g,".").replace(/ \?/g,"?").replace(/ \!/g,"!").replace(/ ,/g,",").replace(/ \' /g,"'").replace(/ n\'t/g,"n't").replace(/ \'m/g,"'m").replace(/ \'s/g,"'s").replace(/ \'ve/g,"'ve").replace(/ \'re/g,"'re")}function remove_accents(b){return b.replace(/[\u0300-\u036f]/g,"")}function lowercase_and_remove_accent(b){return remove_accents(b.toLowerCase())}function fuse(b,n){let a=[],u=0;for(;uthis.tokens_to_ids.get(u)??this.unk_token_id);return this.fuse_unk&&(a=fuse(a,this.unk_token_id)),a}convert_ids_to_tokens(n){return n.map(a=>this.vocab[a]??this.unk_token)}}class WordPieceTokenizer extends TokenizerModel{constructor(n){super(n),this.tokens_to_ids=objectToMap(n.vocab),this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[a,u]of this.tokens_to_ids)this.vocab[u]=a}encode(n){let a=[];for(let u of n){let c=[...u],p=!1,s=0,h=[];for(;s0&&(o=this.config.continuing_subword_prefix+o),this.tokens_to_ids.has(o)){l=o;break}--f}if(l===null){p=!0;break}h.push(l),s=f}p?a.push(this.unk_token):a.push(...h)}return a}}class Unigram extends TokenizerModel{constructor(n,a){super(n);const u=n.vocab.length;this.vocab=new Array(u),this.scores=new Array(u);for(let c=0;c[c,p])),this.bosToken=" ",this.bosTokenId=this.tokens_to_ids.get(this.bosToken),this.eosToken=a.eos_token,this.eosTokenId=this.tokens_to_ids.get(this.eosToken),this.unkToken=this.vocab[this.unk_token_id],this.minScore=min(this.scores)[0],this.unkScore=this.minScore-10,this.scores[this.unk_token_id]=this.unkScore,this.trie=new CharTrie,this.trie.extend(this.vocab),this.fuse_unk=!0}populateNodes(n){const a=n.sentence,u=a.length;let c=0;for(;c{const b=[...Array.from({length:"~".charCodeAt(0)-"!".charCodeAt(0)+1},(c,p)=>p+"!".charCodeAt(0)),...Array.from({length:"¬".charCodeAt(0)-"¡".charCodeAt(0)+1},(c,p)=>p+"¡".charCodeAt(0)),...Array.from({length:"ÿ".charCodeAt(0)-"®".charCodeAt(0)+1},(c,p)=>p+"®".charCodeAt(0))];let n=b.slice(),a=0;for(let c=0;c<256;++c)b.includes(c)||(b.push(c),n.push(256+a),a+=1);let u=n.map(c=>String.fromCharCode(c));return Object.fromEntries(b.map((c,p)=>[c,u[p]]))})(),UNICODE_TO_BYTES=reverseDictionary(BYTES_TO_UNICODE);class BPE extends TokenizerModel{constructor(n){super(n),this.BPE_SPLIT_TOKEN=" ",this.tokens_to_ids=objectToMap(n.vocab),this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[a,u]of this.tokens_to_ids)this.vocab[u]=a;this.bpe_ranks=new Map(n.merges.map((a,u)=>[a,u])),this.merges=n.merges.map(a=>a.split(this.BPE_SPLIT_TOKEN)),this.end_of_word_suffix=n.end_of_word_suffix,this.continuing_subword_suffix=n.continuing_subword_suffix??null,this.byte_fallback=this.config.byte_fallback??!1,this.byte_fallback&&(this.text_encoder=new TextEncoder),this.cache=new Map}bpe(n){if(n.length===0)return[];const a=this.cache.get(n);if(a!==void 0)return a;const u=Array.from(n);this.end_of_word_suffix&&(u[u.length-1]+=this.end_of_word_suffix);let c=[];if(u.length>1){const p=new PriorityQueue((f,l)=>f.score`<0x${s.toString(16).toUpperCase().padStart(2,"0")}>`)):a.push(this.unk_token)}return a}}class LegacyTokenizerModel extends TokenizerModel{constructor(n,a){super(n),this.tokens_to_ids=objectToMap(a.target_lang?n.vocab[a.target_lang]:n.vocab),this.bos_token=a.bos_token,this.bos_token_id=this.tokens_to_ids.get(this.bos_token),this.eos_token=a.eos_token,this.eos_token_id=this.tokens_to_ids.get(this.eos_token),this.pad_token=a.pad_token,this.pad_token_id=this.tokens_to_ids.get(this.pad_token),this.unk_token=a.unk_token,this.unk_token_id=this.tokens_to_ids.get(this.unk_token),this.vocab=new Array(this.tokens_to_ids.size);for(const[u,c]of this.tokens_to_ids)this.vocab[c]=u}encode(n){return n}}class Normalizer extends Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"BertNormalizer":return new BertNormalizer(n);case"Precompiled":return new Precompiled(n);case"Sequence":return new NormalizerSequence(n);case"Replace":return new Replace(n);case"NFC":return new NFC(n);case"NFKD":return new NFKD(n);case"Strip":return new StripNormalizer(n);case"StripAccents":return new StripAccents(n);case"Lowercase":return new Lowercase(n);case"Prepend":return new Prepend(n);default:throw new Error(`Unknown Normalizer type: ${n.type}`)}}normalize(n){throw Error("normalize should be implemented in subclass.")}_call(n){return this.normalize(n)}}class Replace extends Normalizer{normalize(n){let a=createPattern(this.config.pattern);return a===null||(n=n.replaceAll(a,this.config.content)),n}}class NFC extends Normalizer{normalize(n){return n=n.normalize("NFC"),n}}class NFKD extends Normalizer{normalize(n){return n=n.normalize("NFKD"),n}}class StripNormalizer extends Normalizer{normalize(n){return this.config.strip_left&&this.config.strip_right?n=n.trim():(this.config.strip_left&&(n=n.trimStart()),this.config.strip_right&&(n=n.trimEnd())),n}}class StripAccents extends Normalizer{normalize(n){return n=remove_accents(n),n}}class Lowercase extends Normalizer{normalize(n){return n=n.toLowerCase(),n}}class Prepend extends Normalizer{normalize(n){return n=this.config.prepend+n,n}}class NormalizerSequence extends Normalizer{constructor(n){super(n),this.normalizers=n.normalizers.map(a=>Normalizer.fromConfig(a))}normalize(n){return this.normalizers.reduce((a,u)=>u.normalize(a),n)}}class BertNormalizer extends Normalizer{_tokenize_chinese_chars(n){let a=[];for(let u=0;u=19968&&n<=40959||n>=13312&&n<=19903||n>=131072&&n<=173791||n>=173824&&n<=177983||n>=177984&&n<=178207||n>=178208&&n<=183983||n>=63744&&n<=64255||n>=194560&&n<=195103}stripAccents(n){return n.normalize("NFD").replace(/[\u0300-\u036f]/g,"")}normalize(n){return this.config.handle_chinese_chars&&(n=this._tokenize_chinese_chars(n)),this.config.lowercase?(n=n.toLowerCase(),this.config.strip_accents!==!1&&(n=this.stripAccents(n))):this.config.strip_accents&&(n=this.stripAccents(n)),n}}class PreTokenizer extends Callable{static fromConfig(n){if(n===null)return null;switch(n.type){case"BertPreTokenizer":return new BertPreTokenizer(n);case"Sequence":return new PreTokenizerSequence(n);case"WhitespaceSplit":return new WhitespaceSplit(n);case"Metaspace":return new MetaspacePreTokenizer(n);case"ByteLevel":return new ByteLevelPreTokenizer(n);case"Split":return new SplitPreTokenizer(n);case"Punctuation":return new PunctuationPreTokenizer(n);case"Digits":return new DigitsPreTokenizer(n);case"Replace":return new ReplacePreTokenizer(n);default:throw new Error(`Unknown PreTokenizer type: ${n.type}`)}}pre_tokenize_text(n){throw Error("pre_tokenize_text should be implemented in subclass.")}pre_tokenize(n){let a=[];return Array.isArray(n)?a=n.map(u=>this.pre_tokenize_text(u)):a=this.pre_tokenize_text(n),a.flat()}_call(n){return this.pre_tokenize(n)}}class BertPreTokenizer extends PreTokenizer{constructor(n){super(),this.pattern=new RegExp(`[^\\s${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]`,"gu")}pre_tokenize_text(n){return n.trim().match(this.pattern)||[]}}class ByteLevelPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.add_prefix_space=this.config.add_prefix_space,this.trim_offsets=this.config.trim_offsets,this.use_regex=this.config.use_regex??!0,this.pattern=/'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+/gu,this.byte_encoder=BYTES_TO_UNICODE,this.text_encoder=new TextEncoder}pre_tokenize_text(n){return this.add_prefix_space&&!n.startsWith(" ")&&(n=" "+n),(this.use_regex?n.match(this.pattern)||[]:[n]).map(u=>Array.from(this.text_encoder.encode(u),c=>this.byte_encoder[c]).join(""))}}class SplitPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=createPattern(this.config.pattern,this.config.invert)}pre_tokenize_text(n){return this.pattern===null?[]:this.config.invert?n.match(this.pattern)||[]:n.split(this.pattern).filter(a=>a)}}class PunctuationPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=new RegExp(`[^${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]+`,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class DigitsPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n;const a=`[^\\d]+|\\d${this.config.individual_digits?"":"+"}`;this.pattern=new RegExp(a,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class PostProcessor extends Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"TemplateProcessing":return new TemplateProcessing(n);case"ByteLevel":return new ByteLevelPostProcessor(n);case"RobertaProcessing":return new RobertaProcessing(n);case"BertProcessing":return new BertProcessing(n);default:throw new Error(`Unknown PostProcessor type: ${n.type}`)}}post_process(n,...a){throw Error("post_process should be implemented in subclass.")}_call(n,...a){return this.post_process(n,...a)}}class BertProcessing extends PostProcessor{constructor(n){super(n),this.cls=n.cls[0],this.sep=n.sep[0]}post_process(n,a=null){return n=mergeArrays([this.cls],n,[this.sep]),a!==null&&(n=mergeArrays(n,[this.sep],a,[this.sep])),n}}class RobertaProcessing extends BertProcessing{}class TemplateProcessing extends PostProcessor{constructor(n){super(n),this.single=n.single,this.pair=n.pair}post_process(n,a=null){let u=a===null?this.single:this.pair,c=[];for(let p of u)"SpecialToken"in p?c.push(p.SpecialToken.id):"Sequence"in p&&(p.Sequence.id==="A"?c=mergeArrays(c,n):p.Sequence.id==="B"&&(c=mergeArrays(c,a)));return c}}class ByteLevelPostProcessor extends PostProcessor{post_process(n){return n}}class Decoder extends Callable{constructor(n){super(),this.config=n,this.added_tokens=[],this.end_of_word_suffix=null,this.trim_offsets=n.trim_offsets}static fromConfig(n){switch(n.type){case"WordPiece":return new WordPieceDecoder(n);case"Metaspace":return new MetaspaceDecoder(n);case"ByteLevel":return new ByteLevelDecoder(n);case"Replace":return new ReplaceDecoder(n);case"ByteFallback":return new ByteFallback(n);case"Fuse":return new FuseDecoder(n);case"Strip":return new StripDecoder(n);case"Sequence":return new DecoderSequence(n);case"CTC":return new CTCDecoder(n);case"BPEDecoder":return new BPEDecoder(n);default:throw new Error(`Unknown Decoder type: ${n.type}`)}}_call(n){return this.decode(n)}decode(n){return this.decode_chain(n).join("")}decode_chain(n){throw Error("`decode_chain` should be implemented in subclass.")}}class ReplaceDecoder extends Decoder{decode_chain(n){let a=createPattern(this.config.pattern);return a===null?n:n.map(u=>u.replaceAll(a,this.config.content))}}class ByteFallback extends Decoder{constructor(n){super(n),this.text_decoder=new TextDecoder}decode_chain(n){let a=[],u=[];for(let c of n){let p=null;if(c.length===6&&c.startsWith("<0x")&&c.endsWith(">")){let s=parseInt(c.slice(3,5),16);isNaN(s)||(p=s)}if(p!==null)u.push(p);else{if(u.length>0){let s=this.text_decoder.decode(Uint8Array.from(u));a.push(s),u=[]}a.push(c)}}if(u.length>0){let c=this.text_decoder.decode(Uint8Array.from(u));a.push(c),u=[]}return a}}class FuseDecoder extends Decoder{decode_chain(n){return[n.join("")]}}class StripDecoder extends Decoder{constructor(n){super(n),this.content=this.config.content,this.start=this.config.start,this.stop=this.config.stop}decode_chain(n){return n.map(a=>{let u=0;for(let p=0;p(u!==0&&(a.startsWith(this.config.prefix)?a=a.replace(this.config.prefix,""):a=" "+a),this.cleanup&&(a=clean_up_tokenization(a)),a))}}class ByteLevelDecoder extends Decoder{constructor(n){super(n),this.byte_decoder=UNICODE_TO_BYTES,this.text_decoder=new TextDecoder("utf-8",{fatal:!1,ignoreBOM:!0}),this.end_of_word_suffix=null}convert_tokens_to_string(n){let a=n.join(""),u=new Uint8Array([...a].map(p=>this.byte_decoder[p]));return this.text_decoder.decode(u)}decode_chain(n){let a=[],u=[];for(let c of n)this.added_tokens.includes(c)?(u.length>0&&(a.push(this.convert_tokens_to_string(u)),u=[]),a.push(c)):u.push(c);return u.length>0&&a.push(this.convert_tokens_to_string(u)),a}}class CTCDecoder extends Decoder{constructor(n){super(n),this.pad_token=this.config.pad_token,this.word_delimiter_token=this.config.word_delimiter_token,this.cleanup=this.config.cleanup}convert_tokens_to_string(n){if(n.length===0)return"";let a=[n[0]];for(let p=1;pp!==this.pad_token).join("");return this.cleanup&&(c=clean_up_tokenization(c).replaceAll(this.word_delimiter_token," ").trim()),c}decode_chain(n){return[this.convert_tokens_to_string(n)]}}class DecoderSequence extends Decoder{constructor(n){super(n),this.decoders=n.decoders.map(a=>Decoder.fromConfig(a))}decode_chain(n){return this.decoders.reduce((a,u)=>u.decode_chain(a),n)}}class BPEDecoder extends Decoder{constructor(n){super(n),this.suffix=this.config.suffix}decode_chain(n){return n.map((a,u)=>a.replaceAll(this.suffix,u===n.length-1?"":" "))}}class MetaspacePreTokenizer extends PreTokenizer{constructor(n){super(),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement,this.strRep=n.str_rep||this.replacement}pre_tokenize(n){typeof n=="string"&&(n=n.trimStart().split(/\s+/));const a=[];for(let u of n){let c=u.replaceAll(" ",this.strRep);this.addPrefixSpace&&!c.startsWith(this.replacement)&&(c=this.strRep+c),a.push(c)}return a}}class MetaspaceDecoder extends Decoder{constructor(n){super(n),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement}decode_chain(n){let a=[];for(let u=0;uu.normalize("NFKC")).join("~"):n=n.normalize("NFKC"),n}}class PreTokenizerSequence extends PreTokenizer{constructor(n){super(),this.tokenizers=n.pretokenizers.map(a=>PreTokenizer.fromConfig(a))}pre_tokenize_text(n){return typeof n=="string"&&(n=[n]),this.tokenizers.reduce((a,u)=>u.pre_tokenize(a),n)}}class WhitespaceSplit extends PreTokenizer{constructor(n){super()}pre_tokenize_text(n){return whitespace_split(n)}}class ReplacePreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=createPattern(this.config.pattern),this.content=this.config.content}pre_tokenize_text(n){return this.pattern===null?[n]:[n.replaceAll(this.pattern,this.config.content)]}}class PreTrainedTokenizer extends Callable{constructor(n,a){super(),this.normalizer=Normalizer.fromConfig(n.normalizer),this.pre_tokenizer=PreTokenizer.fromConfig(n.pre_tokenizer),this.model=TokenizerModel.fromConfig(n.model,a),this.post_processor=PostProcessor.fromConfig(n.post_processor),this.decoder=Decoder.fromConfig(n.decoder),this.decoder.end_of_word_suffix=this.model.end_of_word_suffix,this.special_tokens=[],this.all_special_ids=[],this.added_tokens=[];for(let u of n.added_tokens){let c=u.id,p=u.content;this.added_tokens.push(p),this.model.tokens_to_ids.set(p,c),this.model.vocab[c]=p,u.special&&(this.special_tokens.push(p),this.all_special_ids.push(c))}this.special_tokens.push(...a.additional_special_tokens??[]),this.special_tokens=[...new Set(this.special_tokens)],this.decoder.added_tokens=this.added_tokens,this.added_tokens_regex=this.added_tokens.length>0?new RegExp("("+this.added_tokens.map(escapeRegExp).join("|")+")"):null,this.mask_token=this.getToken(a,"mask_token"),this.mask_token_id=this.model.tokens_to_ids.get(this.mask_token),this.pad_token=this.getToken(a,"pad_token","eos_token"),this.pad_token_id=this.model.tokens_to_ids.get(this.pad_token),this.sep_token=this.getToken(a,"sep_token"),this.sep_token_id=this.model.tokens_to_ids.get(this.sep_token),this.model_max_length=a.model_max_length,this.remove_space=a.remove_space,this.clean_up_tokenization_spaces=a.clean_up_tokenization_spaces??!0,this.do_lowercase_and_remove_accent=a.do_lowercase_and_remove_accent??!1,this.padding_side="right"}getToken(n,...a){for(let u of a){let c=n[u];if(c)if(typeof c=="object"){if(c.__type==="AddedToken")return c.content;throw Error(`Unknown token: ${c}`)}else return c}return null}static async from_pretrained(n,{progress_callback:a=null,config:u=null,cache_dir:c=null,local_files_only:p=!1,revision:s="main"}={}){let h=await loadTokenizer(n,{progress_callback:a,config:u,cache_dir:c,local_files_only:p,revision:s});return new this(...h)}prepare_model_inputs(n){return n}_call(n,{text_pair:a=null,add_special_tokens:u=!0,padding:c=!1,truncation:p=null,max_length:s=null,return_tensor:h=!0}={}){let f;if(Array.isArray(n)){if(n.length===0)throw Error("text array must be non-empty");if(a!==null){if(Array.isArray(a)){if(n.length!==a.length)throw Error("text and text_pair must have the same length")}else throw Error("text_pair must also be an array");f=n.map((e,r)=>this.encode(e,a[r],{add_special_tokens:u}))}else f=n.map(e=>this.encode(e,null,{add_special_tokens:u}))}else{if(n===null)throw Error("text may not be null");if(Array.isArray(a))throw Error("When specifying `text_pair`, since `text` is a string, `text_pair` must also be a string (i.e., not an array).");f=[this.encode(n,a,{add_special_tokens:u})]}let l=max(f.map(e=>e.length))[0];s===null&&(s=l),s=Math.min(s,this.model_max_length);let o=[];if(c||p)for(let e=0;es)p&&(f[e]=f[e].slice(0,s)),o.push(new Array(f[e].length).fill(1));else if(c){let r=s-f[e].length;this.padding_side==="right"?(o.push(new Array(f[e].length).fill(1).concat(new Array(r).fill(0))),f[e].push(...new Array(r).fill(this.pad_token_id))):(o.push(new Array(r).fill(0).concat(new Array(f[e].length).fill(1))),f[e].unshift(...new Array(r).fill(this.pad_token_id)))}else o.push(new Array(f[e].length).fill(1));else o=f.map(e=>new Array(e.length).fill(1));if(h){if(!(c&&p)&&f.some(r=>r.length!==f[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=true' and 'truncation=true' to have batched tensors with the same length.");let e=[f.length,f[0].length];f=new Tensor("int64",BigInt64Array.from(f.flat().map(BigInt)),e),o=new Tensor("int64",BigInt64Array.from(o.flat().map(BigInt)),e)}else Array.isArray(n)||(f=f[0],o=o[0]);let t={input_ids:f,attention_mask:o};return t=this.prepare_model_inputs(t),t}_encode_text(n){return n===null?null:(this.added_tokens_regex?n.split(this.added_tokens_regex).filter(c=>c):[n]).map(c=>{if(this.added_tokens.includes(c))return c;{this.remove_space===!0&&(c=c.trim().split(/\s+/).join(" ")),this.do_lowercase_and_remove_accent&&(c=lowercase_and_remove_accent(c)),this.normalizer!==null&&(c=this.normalizer(c));let p=this.pre_tokenizer!==null?this.pre_tokenizer(c):[c];return this.model(p)}}).flat()}encode(n,a=null,{add_special_tokens:u=!0}={}){let c=this._encode_text(n),p=this._encode_text(a),s=this.post_processor!==null&&u?this.post_processor(c,p):mergeArrays(c??[],p??[]);return this.model.convert_tokens_to_ids(s)}batch_decode(n,a={}){return n.map(u=>this.decode(u,a))}decode(n,a={}){if(!Array.isArray(n)||n.length===0||!isIntegralNumber(n[0]))throw Error("token_ids must be a non-empty array of integers.");return this.decode_single(n,a)}decode_single(n,{skip_special_tokens:a=!1,clean_up_tokenization_spaces:u=null}){let c=this.model.convert_ids_to_tokens(n);a&&(c=c.filter(s=>!this.special_tokens.includes(s)));let p=this.decoder(c);return this.decoder.end_of_word_suffix&&(p=p.replaceAll(this.decoder.end_of_word_suffix," "),a&&(p=p.trim())),(u??this.clean_up_tokenization_spaces)&&(p=clean_up_tokenization(p)),p}}function add_token_types(b){if(b.input_ids instanceof Tensor)b.token_type_ids=new Tensor("int64",new BigInt64Array(b.input_ids.data.length),b.input_ids.dims);else if(Array.isArray(b.input_ids))Array.isArray(b.input_ids[0])?b.token_type_ids=b.input_ids.map(n=>new Array(n.length).fill(0)):b.token_type_ids=new Array(b.input_ids.length).fill(0);else throw new Error("Input ids must be a Tensor or an Array");return b}class BertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class AlbertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class MobileBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class SqueezeBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class DebertaTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class DebertaV2Tokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class HerbertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class DistilBertTokenizer extends PreTrainedTokenizer{}class CamembertTokenizer extends PreTrainedTokenizer{}class XLMTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),console.warn('WARNING: `XLMTokenizer` is not yet supported by Hugging Face\'s "fast" tokenizers library. Therefore, you may experience slightly inaccurate results.')}prepare_model_inputs(n){return add_token_types(n)}}class T5Tokenizer extends PreTrainedTokenizer{}class GPT2Tokenizer extends PreTrainedTokenizer{}class BartTokenizer extends PreTrainedTokenizer{}class MBartTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^[a-z]{2}_[A-Z]{2}$/,this.language_codes=this.special_tokens.filter(u=>this.languageRegex.test(u)),this.lang_to_token=u=>u}_build_translation_inputs(n,a,u){return _build_translation_inputs(this,n,a,u)}}class MBart50Tokenizer extends MBartTokenizer{}class RobertaTokenizer extends PreTrainedTokenizer{}class BloomTokenizer extends PreTrainedTokenizer{constructor(n,a){var p,s;const u=".,!?…。,、।۔،",c=(s=(p=n.pre_tokenizer)==null?void 0:p.pretokenizers[0])==null?void 0:s.pattern;c&&c.Regex===` ?[^(\\s|[${u}])]+`&&(c.Regex=` ?[^\\s${u}]+`),super(n,a)}}class LlamaTokenizer extends PreTrainedTokenizer{}class CodeLlamaTokenizer extends PreTrainedTokenizer{}class XLMRobertaTokenizer extends PreTrainedTokenizer{}class MPNetTokenizer extends PreTrainedTokenizer{}class FalconTokenizer extends PreTrainedTokenizer{}class GPTNeoXTokenizer extends PreTrainedTokenizer{}function _build_translation_inputs(b,n,a,u){if(!("language_codes"in b)||!Array.isArray(b.language_codes))throw new Error("Tokenizer must have `language_codes` attribute set and it should be an array of language ids.");if(!("languageRegex"in b)||!(b.languageRegex instanceof RegExp))throw new Error("Tokenizer must have `languageRegex` attribute set and it should be a regular expression.");if(!("lang_to_token"in b)||typeof b.lang_to_token!="function")throw new Error("Tokenizer must have `lang_to_token` attribute set and it should be a function.");const c=u.src_lang,p=u.tgt_lang;if(!b.language_codes.includes(p))throw new Error(`Target language code "${p}" is not valid. Must be one of: {${b.language_codes.join(", ")}}`);if(c!==void 0){if(!b.language_codes.includes(c))throw new Error(`Source language code "${c}" is not valid. Must be one of: {${b.language_codes.join(", ")}}`);for(let s of b.post_processor.config.single)if("SpecialToken"in s&&b.languageRegex.test(s.SpecialToken.id)){s.SpecialToken.id=b.lang_to_token(c);break}}return u.forced_bos_token_id=b.model.convert_tokens_to_ids([b.lang_to_token(p)])[0],b._call(n,a)}class NllbTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^[a-z]{3}_[A-Z][a-z]{3}$/,this.language_codes=this.special_tokens.filter(u=>this.languageRegex.test(u)),this.lang_to_token=u=>u}_build_translation_inputs(n,a,u){return _build_translation_inputs(this,n,a,u)}}class M2M100Tokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^__[a-z]{2,3}__$/,this.language_codes=this.special_tokens.filter(u=>this.languageRegex.test(u)).map(u=>u.slice(2,-2)),this.lang_to_token=u=>`__${u}__`}_build_translation_inputs(n,a,u){return _build_translation_inputs(this,n,a,u)}}const WHISPER_LANGUAGES=[["en","english"],["zh","chinese"],["de","german"],["es","spanish"],["ru","russian"],["ko","korean"],["fr","french"],["ja","japanese"],["pt","portuguese"],["tr","turkish"],["pl","polish"],["ca","catalan"],["nl","dutch"],["ar","arabic"],["sv","swedish"],["it","italian"],["id","indonesian"],["hi","hindi"],["fi","finnish"],["vi","vietnamese"],["he","hebrew"],["uk","ukrainian"],["el","greek"],["ms","malay"],["cs","czech"],["ro","romanian"],["da","danish"],["hu","hungarian"],["ta","tamil"],["no","norwegian"],["th","thai"],["ur","urdu"],["hr","croatian"],["bg","bulgarian"],["lt","lithuanian"],["la","latin"],["mi","maori"],["ml","malayalam"],["cy","welsh"],["sk","slovak"],["te","telugu"],["fa","persian"],["lv","latvian"],["bn","bengali"],["sr","serbian"],["az","azerbaijani"],["sl","slovenian"],["kn","kannada"],["et","estonian"],["mk","macedonian"],["br","breton"],["eu","basque"],["is","icelandic"],["hy","armenian"],["ne","nepali"],["mn","mongolian"],["bs","bosnian"],["kk","kazakh"],["sq","albanian"],["sw","swahili"],["gl","galician"],["mr","marathi"],["pa","punjabi"],["si","sinhala"],["km","khmer"],["sn","shona"],["yo","yoruba"],["so","somali"],["af","afrikaans"],["oc","occitan"],["ka","georgian"],["be","belarusian"],["tg","tajik"],["sd","sindhi"],["gu","gujarati"],["am","amharic"],["yi","yiddish"],["lo","lao"],["uz","uzbek"],["fo","faroese"],["ht","haitian creole"],["ps","pashto"],["tk","turkmen"],["nn","nynorsk"],["mt","maltese"],["sa","sanskrit"],["lb","luxembourgish"],["my","myanmar"],["bo","tibetan"],["tl","tagalog"],["mg","malagasy"],["as","assamese"],["tt","tatar"],["haw","hawaiian"],["ln","lingala"],["ha","hausa"],["ba","bashkir"],["jw","javanese"],["su","sundanese"]],WHISPER_LANGUAGE_MAPPING=new Map(WHISPER_LANGUAGES),WHISPER_TO_LANGUAGE_CODE_MAPPING=new Map([...WHISPER_LANGUAGES.map(([b,n])=>[n,b]),["burmese","my"],["valencian","ca"],["flemish","nl"],["haitian","ht"],["letzeburgesch","lb"],["pushto","ps"],["panjabi","pa"],["moldavian","ro"],["moldovan","ro"],["sinhalese","si"],["castilian","es"]]);class WhisperTokenizer extends PreTrainedTokenizer{_decode_asr(n,{return_timestamps:a=!1,return_language:u=!1,time_precision:c=null,force_full_sequences:p=!0}={}){if(c===null)throw Error("Must specify time_precision");let s=null;const h=a==="word";function f(){return{language:s,timestamp:[null,null],text:""}}const l=[];let o=f(),t=0;const e=this.model.convert_tokens_to_ids(["<|notimestamps|>"])[0]+1;let r=[],i=[],d=!1,g=null;const m=new Set(this.all_special_ids);for(let w of n){const v=w.tokens,S=h?w.token_timestamps:null;let O=null,A=e;if("stride"in w){const[N,B,$]=w.stride;if(t-=B,g=N-$,B&&(A=B/c+e),$)for(let L=v.length-1;L>=0;--L){const H=v[L];if(H>=e){if(O!==null&&(H-e)*c=e){const $=(B-e)*c+t,L=round($,2);if(O!==null&&B>=O)d=!0;else if(d||r.length>0&&B0?(r.push(T),h&&i.push(M)):r.every(N=>N.length===0)&&(o=f(),r=[],T=[],i=[],M=[])}if(r.length>0){if(p&&a)throw new Error("Whisper did not predict an ending timestamp, which can happen if audio is cut off in the middle of a word. Also make sure WhisperTimeStampLogitsProcessor was used during generation.");const[w,v]=this.findLongestCommonSequence(r,i),S=this.decode(w);o.text=S,h&&(o.words=this.collateWordTimestamps(w,v,s)),l.push(o)}let _=Object.create(null);const y=l.map(w=>w.text).join("");if(a||u){for(let w=0;w0;let h=s?[]:null,f=s?a[0]:null;for(let l=1;lL===N[H]).length,$=B/w+v;B>1&&$>t&&(t=$,e=[S,O,T,M])}const[i,d,g,m]=e,_=Math.floor((d+i)/2),y=Math.floor((m+g)/2);p.push(...u.slice(0,_)),u=o.slice(y),c=u.length,s&&(h.push(...f.slice(0,_)),f=a[l].slice(y))}return p.push(...u),s?(h.push(...f),[p,h]):[p,[]]}collateWordTimestamps(n,a,u){let[c,p,s]=this.combineTokensIntoWords(n,u),h=[];for(let f=0;f=c){let h=(s-c)*u;h=round(h,2),p.push(`<|${h}|>`),p.push([])}else p[p.length-1].push(s);return p=p.map(s=>typeof s=="string"?s:super.decode(s,a)),p.join("")}splitTokensOnUnicode(n){const a=this.decode(n,{decode_with_timestamps:!0}),u="�";let c=[],p=[],s=[],h=[],f=[],l=0;for(let o=0;o=this.model.tokens_to_ids.get("<|endoftext|>"),i=o.startsWith(" "),d=o.trim(),g=f.test(d);if(r||i||g||p.length===0)p.push(o),s.push(t),h.push(e);else{const m=p.length-1;p[m]+=o,s[m].push(...t),h[m].push(...e)}}return[p,s,h]}mergePunctuations(n,a,u,c,p){let s=structuredClone(n),h=structuredClone(a),f=structuredClone(u),l=s.length-2,o=s.length-1;for(;l>=0;)s[l].startsWith(" ")&&c.includes(s[l].trim())?(s[o]=s[l]+s[o],h[o]=mergeArrays(h[l],h[o]),f[o]=mergeArrays(f[l],f[o]),s[l]="",h[l]=[],f[l]=[]):o=l,--l;for(l=0,o=1;ot),h.filter(t=>t.length>0),f.filter(t=>t.length>0)]}get_decoder_prompt_ids({language:n=null,task:a=null,no_timestamps:u=!0}={}){let c=[];if(n){n=n.toLowerCase();let p=WHISPER_TO_LANGUAGE_CODE_MAPPING.get(n);if(p===void 0)if(WHISPER_LANGUAGE_MAPPING.has(n))p=n;else{const f=n.length===2?WHISPER_LANGUAGE_MAPPING.keys():WHISPER_LANGUAGE_MAPPING.values();throw new Error(`Language "${n}" is not supported. Must be one of: ${JSON.stringify(f)}`)}let s=this.model.tokens_to_ids.get(`<|${p}|>`);if(s===void 0)throw new Error(`Unable to find language "${p}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);c.push(s)}else c.push(null);if(a){if(a=a.toLowerCase(),a!=="transcribe"&&a!=="translate")throw new Error(`Task "${a}" is not supported. Must be one of: ["transcribe", "translate"]`);let p=this.model.tokens_to_ids.get(`<|${a}|>`);if(p===void 0)throw new Error(`Unable to find task "${a}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);c.push(p)}else c.push(null);if(u){let p=this.model.tokens_to_ids.get("<|notimestamps|>");if(p===void 0)throw new Error('Unable to find "<|notimestamps|>" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.');c.push(p)}return c.map((p,s)=>[s+1,p]).filter(p=>p[1]!==null)}}class CodeGenTokenizer extends PreTrainedTokenizer{}class CLIPTokenizer extends PreTrainedTokenizer{}class MarianTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^(>>\w+<<)\s*/g,this.supported_language_codes=this.model.vocab.filter(u=>this.languageRegex.test(u)),console.warn('WARNING: `MarianTokenizer` is not yet supported by Hugging Face\'s "fast" tokenizers library. Therefore, you may experience slightly inaccurate results.')}_encode_text(n){if(n===null)return null;let[a,...u]=n.trim().split(this.languageRegex);if(u.length===0)return super._encode_text(a);if(u.length===2){let[c,p]=u;return this.supported_language_codes.includes(c)||console.warn(`Unsupported language code "${c}" detected, which may lead to unexpected behavior. Should be one of: ${JSON.stringify(this.supported_language_codes)}`),mergeArrays([c],super._encode_text(p))}}}class Wav2Vec2CTCTokenizer extends PreTrainedTokenizer{}class BlenderbotTokenizer extends PreTrainedTokenizer{}class BlenderbotSmallTokenizer extends PreTrainedTokenizer{}class SpeechT5Tokenizer extends PreTrainedTokenizer{}class AutoTokenizer{static async from_pretrained(n,{quantized:a=!0,progress_callback:u=null,config:c=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){let[f,l]=await loadTokenizer(n,{quantized:a,progress_callback:u,config:c,cache_dir:p,local_files_only:s,revision:h}),o=l.tokenizer_class.replace(/Fast$/,""),t=this.TOKENIZER_CLASS_MAPPING[o];return t||(console.warn(`Unknown tokenizer class "${o}", attempting to construct from base class.`),t=PreTrainedTokenizer),new t(f,l)}}je(AutoTokenizer,"TOKENIZER_CLASS_MAPPING",{T5Tokenizer,DistilBertTokenizer,CamembertTokenizer,DebertaTokenizer,DebertaV2Tokenizer,BertTokenizer,HerbertTokenizer,XLMTokenizer,MobileBertTokenizer,SqueezeBertTokenizer,AlbertTokenizer,GPT2Tokenizer,BartTokenizer,MBartTokenizer,MBart50Tokenizer,RobertaTokenizer,WhisperTokenizer,CodeGenTokenizer,CLIPTokenizer,MarianTokenizer,BloomTokenizer,NllbTokenizer,M2M100Tokenizer,LlamaTokenizer,CodeLlamaTokenizer,XLMRobertaTokenizer,MPNetTokenizer,FalconTokenizer,GPTNeoXTokenizer,Wav2Vec2CTCTokenizer,BlenderbotTokenizer,BlenderbotSmallTokenizer,SpeechT5Tokenizer,PreTrainedTokenizer});async function loadConfig(b,n){return await getModelJSON(b,"config.json",!0,n)}class PretrainedConfig{constructor(n){this.model_type=null,this.is_encoder_decoder=!1,Object.assign(this,n)}static async from_pretrained(n,{progress_callback:a=null,config:u=null,cache_dir:c=null,local_files_only:p=!1,revision:s="main"}={}){let h=u??await loadConfig(n,{progress_callback:a,config:u,cache_dir:c,local_files_only:p,revision:s});return new this(h)}}class AutoConfig{static async from_pretrained(...n){return PretrainedConfig.from_pretrained(...n)}}class LogitsProcessorList extends Callable{constructor(){super(),this.processors=[]}push(n){this.processors.push(n)}extend(n){this.processors.push(...n)}_call(n,a){for(let u of a)this.processors.forEach(c=>c(n,u))}[Symbol.iterator](){return this.processors.values()}}class LogitsProcessor extends Callable{_call(n,a){throw Error("`_call` should be implemented in a subclass")}}class ForceTokensLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.force_token_map=Object.fromEntries(n??[])}_call(n,a){let u=this.force_token_map[n.length];return exists(u)&&(a.data.fill(-1/0),a.data[u]=0),a}}class ForcedBOSTokenLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.bos_token_id=n}_call(n,a){return n.length===1&&(a.data.fill(-1/0),a.data[this.bos_token_id]=0),a}}class ForcedEOSTokenLogitsProcessor extends LogitsProcessor{constructor(n,a){super(),this.max_length=n,this.forced_eos_token_id=a}_call(n,a){}}class SuppressTokensAtBeginLogitsProcessor extends LogitsProcessor{constructor(n,a){super(),this.begin_suppress_tokens=n,this.begin_index=a}_call(n,a){if(n.length===this.begin_index)for(let u of this.begin_suppress_tokens)a.data[u]=-1/0;return a}}class WhisperTimeStampLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.eos_token_id=n.eos_token_id,this.no_timestamps_token_id=n.no_timestamps_token_id,this.timestamp_begin=this.no_timestamps_token_id+1,this.begin_index=(n.forced_decoder_ids||[]).length+2,n.forced_decoder_ids.slice(-1)[0][1]===this.no_timestamps_token_id&&(this.begin_index-=1),this.max_initial_timestamp_index=n.max_initial_timestamp_index}_call(n,a){if(a.data[this.no_timestamps_token_id]=-1/0,n.length===this.begin_index-1)return a.data.fill(-1/0),a.data[this.timestamp_begin]=0,a;const u=n.slice(this.begin_index),c=u.length>=1&&u[u.length-1]>=this.timestamp_begin,p=u.length<2||u[u.length-2]>=this.timestamp_begin;if(c&&(p?a.data.subarray(this.timestamp_begin).fill(-1/0):a.data.subarray(0,this.eos_token_id).fill(-1/0)),n.length===this.begin_index&&this.max_initial_timestamp_index!==null){const l=this.timestamp_begin+this.max_initial_timestamp_index;a.data.subarray(l+1).fill(-1/0)}const s=log_softmax(a.data),h=Math.log(s.subarray(this.timestamp_begin).map(Math.exp).reduce((l,o)=>l+o)),f=max(s.subarray(0,this.timestamp_begin))[0];return h>f&&a.data.subarray(0,this.timestamp_begin).fill(-1/0),a}}class NoRepeatNGramLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.no_repeat_ngram_size=n}getNgrams(n){const a=n.length,u=[];for(let p=0;p0&&(c=c.map(p=>p/this.generation_config.temperature)),c}randomSelect(n){let a=n.reduce((c,p)=>c+p,0),u=Math.random()*a;for(let c=0;c1)return new BeamSearchSampler(n);if(n.num_return_sequences>1)throw Error(`num_return_sequences has to be 1 when doing greedy search, but is ${n.num_return_sequences}.`);return new GreedySampler(n)}}class GreedySampler extends Sampler{sample(n,a=-1){let u=this.getLogits(n,a);return[[max(u)[1],0]]}}class MultinomialSampler extends Sampler{sample(n,a=-1){let u=n.dims.at(-1);this.generation_config.top_k>0&&(u=Math.min(this.generation_config.top_k,u));const c=this.getLogits(n,a),p=getTopItems(c,u),s=softmax(p.map(h=>h[1]));return Array.from({length:this.generation_config.num_beams},()=>{const h=this.randomSelect(s);return[p[h][0],Math.log(s[h])]})}}class BeamSearchSampler extends Sampler{sample(n,a=-1){let u=n.dims.at(-1);this.generation_config.top_k>0&&(u=Math.min(this.generation_config.top_k,u));const c=this.getLogits(n,a),p=getTopItems(c,u),s=softmax(p.map(h=>h[1]));return Array.from({length:this.generation_config.num_beams},(h,f)=>[p[f][0],Math.log(s[f])])}}const{InferenceSession,Tensor:ONNXTensor}=ONNX,MODEL_TYPES={EncoderOnly:0,EncoderDecoder:1,Seq2Seq:2,Vision2Seq:3,DecoderOnly:4},MODEL_TYPE_MAPPING=new Map,MODEL_NAME_TO_CLASS_MAPPING=new Map,MODEL_CLASS_TO_NAME_MAPPING=new Map;async function constructSession(b,n,a){let u=`onnx/${n}${a.quantized?"_quantized":""}.onnx`,c=await getModelFile(b,u,!0,a);try{return await InferenceSession.create(c,{executionProviders})}catch(p){if(executionProviders.length===1&&executionProviders[0]==="wasm")throw p;return console.warn(p),console.warn("Something went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback. "),await InferenceSession.create(c,{executionProviders:["wasm"]})}}async function validateInputs(b,n){const a={},u=[];for(let s of b.inputNames)n[s]===void 0?u.push(s):a[s]=n[s];if(u.length>0)throw new Error(`An error occurred during model execution: "Missing the following inputs: ${u.join(", ")}.`);const c=Object.keys(n).length,p=b.inputNames.length;if(c>p){let s=Object.keys(n).filter(h=>!b.inputNames.includes(h));console.warn(`WARNING: Too many inputs were provided (${c} > ${p}). The following inputs will be ignored: "${s.join(", ")}".`)}return a}async function sessionRun(b,n){const a=await validateInputs(b,n);try{let u=await b.run(a);return u=replaceTensors(u),u}catch(u){throw console.error(`An error occurred during model execution: "${u}".`),console.error("Inputs given to model:",a),u}}function replaceTensors(b){for(let n in b)b[n]instanceof ONNXTensor?b[n]=new Tensor(b[n]):typeof b[n]=="object"&&replaceTensors(b[n]);return b}function toI64Tensor(b){if(b instanceof Tensor)return b;if(b.length===0)throw Error("items must be non-empty");if(Array.isArray(b[0])){if(b.some(n=>n.length!==b[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' and/or 'truncation=True' to have batched tensors with the same length.");return new Tensor("int64",BigInt64Array.from(b.flat().map(n=>BigInt(n))),[b.length,b[0].length])}else return new Tensor("int64",BigInt64Array.from(b.map(n=>BigInt(n))),[1,b.length])}function prepareAttentionMask(b,n){let a=b.config.pad_token_id??null,u=b.config.eos_token_id??null;isIntegralNumber(u)&&(u=[u]);let c=n.indexOf(a)!==-1,p=u===null||!u.includes(a);if(c&&p){let s=BigInt64Array.from(n.data.map(h=>h!=a));return new Tensor("int64",s,n.dims)}else return ones_like(n)}function boolTensor(b){return new Tensor("bool",[b],[1])}async function seq2seqForward(b,n){let{encoder_outputs:a,past_key_values:u}=n;a||(a=(await encoderForward(b,n)).last_hidden_state);let c={input_ids:n.decoder_input_ids,encoder_hidden_states:a,use_cache_branch:boolTensor(!!u)};b.decoder_merged_session.inputNames.includes("encoder_attention_mask")&&(c.encoder_attention_mask=n.attention_mask),b.addPastKeyValues(c,u);const p=await sessionRun(b.decoder_merged_session,c);let s=p.logits;u=b.getPastKeyValues(p,u);const h=b.getAttentions(p);return new Seq2SeqLMOutput({logits:s,past_key_values:u,encoder_outputs:a,...h})}function seq2seqStartBeams(b,n,a,u){let c=[],p=0;const s=b.requires_attention_mask??!0;let h=a.decoder_input_ids??a.decoder_start_token_id??a.bos_token_id??a.eos_token_id;h instanceof Tensor?h=h.tolist().flat():Array.isArray(h)||(h=[h]);for(let f of n){f.dims=[1,...f.dims];let l={inputs:f,encoder_outputs:null,prev_model_outputs:null,output_token_ids:h,done:!1,score:0,id:p++};s&&(l.attention_mask=prepareAttentionMask(b,f)),c.push(l)}return c}async function seq2seqRunBeam(b,n){var s;const a=b.main_input_name;let u=n.output_token_ids;n.prev_model_outputs&&(u=u.slice(-1));let c={[a]:n.inputs,decoder_input_ids:toI64Tensor(u),encoder_outputs:n.encoder_outputs,past_key_values:(s=n.prev_model_outputs)==null?void 0:s.past_key_values};n.attention_mask&&(c.attention_mask=n.attention_mask);let p=await b.forward(c);return n.prev_model_outputs=p,n.encoder_outputs=p.encoder_outputs,p}function seq2seqUpdatebeam(b,n){b.output_token_ids=[...b.output_token_ids,n]}async function encoderForward(b,n){let a={};for(let u of b.session.inputNames)a[u]=n[u];return await sessionRun(b.session,a)}async function decoderForward(b,n){let{input_ids:a,past_key_values:u,attention_mask:c}=n,p={input_ids:a,attention_mask:c??prepareAttentionMask(b,a),use_cache_branch:boolTensor(!!u)};b.addPastKeyValues(p,u);let s=await sessionRun(b.session,p),h=s.logits;return u=b.getPastKeyValues(s,u),{logits:h,past_key_values:u}}function decoderStartBeams(b,n,a,u,c){let p=[],s=0;for(let h of n){let f=h.tolist().map(Number);h.dims=[1,...h.dims];let l;c?(l=c[s],l.dims=[1,...l.dims]):l=prepareAttentionMask(b,h);let o={input:h,model_input_ids:h,attention_mask:l,prev_model_outputs:null,output_token_ids:f,num_output_tokens:u,done:!1,score:0,id:s++};p.push(o)}return p}async function decoderRunBeam(b,n){var p;let a=new BigInt64Array(n.output_token_ids.length).fill(1n),u={input_ids:n.model_input_ids,attention_mask:new Tensor("int64",a,[1,a.length]),past_key_values:(p=n.prev_model_outputs)==null?void 0:p.past_key_values},c=await b.forward(u);return n.prev_model_outputs=c,c}function decoderUpdatebeam(b,n){b.output_token_ids=[...b.output_token_ids,n],b.model_input_ids=new Tensor("int64",[BigInt(n)],[1,1])}class PreTrainedModel extends Callable{constructor(a,u){super();je(this,"main_input_name","input_ids");this.config=a,this.session=u;const c=MODEL_CLASS_TO_NAME_MAPPING.get(this.constructor),p=MODEL_TYPE_MAPPING.get(c);this.can_generate=!1,this._runBeam=null,this._getStartBeams=null,this._updateBeam=null,this._forward=null,p===MODEL_TYPES.DecoderOnly?(this.can_generate=!0,this._runBeam=decoderRunBeam,this._getStartBeams=decoderStartBeams,this._updateBeam=decoderUpdatebeam,this._forward=decoderForward):p===MODEL_TYPES.Seq2Seq||p===MODEL_TYPES.Vision2Seq?(this.can_generate=!0,this._runBeam=seq2seqRunBeam,this._getStartBeams=seq2seqStartBeams,this._updateBeam=seq2seqUpdatebeam,this._forward=seq2seqForward):p===MODEL_TYPES.EncoderDecoder?this._forward=encoderForward:this._forward=encoderForward}async dispose(){let a=[];for(let u of Object.keys(this)){let c=this[u];c instanceof InferenceSession&&a.push(c.handler.dispose())}return await Promise.all(a)}static async from_pretrained(a,{quantized:u=!0,progress_callback:c=null,config:p=null,cache_dir:s=null,local_files_only:h=!1,revision:f="main",model_file_name:l=null}={}){let o={quantized:u,progress_callback:c,config:p,cache_dir:s,local_files_only:h,revision:f,model_file_name:l};const t=MODEL_CLASS_TO_NAME_MAPPING.get(this),e=MODEL_TYPE_MAPPING.get(t);let r;return e===MODEL_TYPES.DecoderOnly?r=await Promise.all([AutoConfig.from_pretrained(a,o),constructSession(a,o.model_file_name??"decoder_model_merged",o),getModelJSON(a,"generation_config.json",!1,o)]):e===MODEL_TYPES.Seq2Seq||e===MODEL_TYPES.Vision2Seq?r=await Promise.all([AutoConfig.from_pretrained(a,o),constructSession(a,"encoder_model",o),constructSession(a,"decoder_model_merged",o),getModelJSON(a,"generation_config.json",!1,o)]):e===MODEL_TYPES.EncoderDecoder?r=await Promise.all([AutoConfig.from_pretrained(a,o),constructSession(a,"encoder_model",o),constructSession(a,"decoder_model_merged",o)]):(e!==MODEL_TYPES.EncoderOnly&&console.warn(`Model type for '${t}' not found, assuming encoder-only architecture. Please report this at https://github.com/xenova/transformers.js/issues/new/choose.`),r=await Promise.all([AutoConfig.from_pretrained(a,o),constructSession(a,o.model_file_name??"model",o)])),new this(...r)}async _call(a){return await this.forward(a)}async forward(a){return await this._forward(this,a)}_get_logits_processor(a,u,c=null){const p=new LogitsProcessorList;if(a.repetition_penalty!==null&&a.repetition_penalty!==1&&p.push(new RepetitionPenaltyLogitsProcessor(a.repetition_penalty)),a.no_repeat_ngram_size!==null&&a.no_repeat_ngram_size>0&&p.push(new NoRepeatNGramLogitsProcessor(a.no_repeat_ngram_size)),a.min_length!==null&&a.eos_token_id!==null&&a.min_length>0&&p.push(new MinLengthLogitsProcessor(a.min_length,a.eos_token_id)),a.min_new_tokens!==null&&a.eos_token_id!==null&&a.min_new_tokens>0&&p.push(new MinNewTokensLengthLogitsProcessor(u,a.min_new_tokens,a.eos_token_id)),a.forced_bos_token_id!==null&&p.push(new ForcedBOSTokenLogitsProcessor(a.forced_bos_token_id)),a.forced_eos_token_id!==null&&p.push(new ForcedEOSTokenLogitsProcessor(a.max_length,a.forced_eos_token_id)),a.begin_suppress_tokens!==null){let s=u>1||a.forced_bos_token_id===null?u:u+1;a.forced_decoder_ids!==null&&(s+=a.forced_decoder_ids[a.forced_decoder_ids.length-1][0]),p.push(new SuppressTokensAtBeginLogitsProcessor(a.begin_suppress_tokens,s))}return a.forced_decoder_ids!==null&&p.push(new ForceTokensLogitsProcessor(a.forced_decoder_ids)),c!==null&&p.extend(c),p}_get_generation_config(a){let u=new GenerationConfig(this.config);return"generation_config"in this&&Object.assign(u,this.generation_config),a!==null&&Object.assign(u,a),u}async generate(a,u=null,c=null,{inputs_attention_mask:p=null}={}){if(!this.can_generate){let m=`The current model class (${MODEL_CLASS_TO_NAME_MAPPING.get(this.constructor)}) is not compatible with \`.generate()\`, as it doesn't have a language model head.`;const _=this.config.model_type,y=MODEL_WITH_LM_HEAD_MAPPING_NAMES.get(_)??MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES.get(_)??MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES.get(_)??MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES.get(_);throw y&&(m+=` Please use the following class instead: '${y[0]}'`),Error(m)}if(!(a instanceof Tensor)&&!isTypedArray(a)&&!Array.isArray(a))throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${a.constructor.name}".`);let s;if(this.config.is_encoder_decoder)s=0;else if(s=a instanceof Tensor?a.dims.at(-1):a.length,s===0)throw Error("Must supply a non-empty array of input token ids.");u=this._get_generation_config(u),c=c??new LogitsProcessorList,c=this._get_logits_processor(u,s,c);let h=u.eos_token_id;h!==null&&!Array.isArray(h)&&(h=[h]);let f=1;const l=f+(u.max_new_tokens??1/0),o=Number.isInteger(u.max_length)&&(u.max_new_tokens??null)===null;let t=Sampler.getSampler(u),e=this.getStartBeams(a,u,f,p);for(;e.some(g=>!g.done)&&f=u.max_length){m.done=!0,g.push(m);continue}let _=await this.runBeam(m);u.output_attentions&&this.addAttentionsToBeam(m,_),u.output_scores;let y=_.logits.slice(null,-1,null);c(m.output_token_ids,y);let w=t(y);for(let[v,S]of w){let O={...m};this.updateBeam(O,v),O.score+=S,h&&h.includes(v)&&(O.done=!0),g.push(O)}}++f,g=this.groupBeams(g).map(m=>m.sort((_,y)=>y.score-_.score).slice(0,u.num_beams)),e=g.flat(),u.callback_function&&u.callback_function(e)}const r=this.groupBeams(e),i=g=>r.map(m=>u.num_return_sequences>1?m.slice(0,u.num_return_sequences).map(_=>_[g]):[m[0][g]]).flat(),d=i("output_token_ids");if(u.return_dict_in_generate){const g=i("decoder_attentions"),m=i("cross_attentions");return{sequences:d,decoder_attentions:g,cross_attentions:m}}else return d}addAttentionsToBeam(a,u){if(this.config.is_encoder_decoder){if(!u.cross_attentions||u.cross_attentions.length===0)throw Error("`output_attentions` is true, but the model did not produce cross-attentions. This is most likely because the model was not exported with `output_attentions=True`.");a.cross_attentions||(a.cross_attentions=[]),a.cross_attentions.push(u.cross_attentions)}if(!u.decoder_attentions||u.decoder_attentions.length===0)throw Error("`output_attentions` is true, but the model did not produce decoder-attentions. This is most likely because the model was not exported with `output_attentions=True`.");a.decoder_attentions||(a.decoder_attentions=[]),a.decoder_attentions.push(u.decoder_attentions)}groupBeams(a){const u=Object.create(null);for(const c of a)u[c.id]===void 0?u[c.id]=[c]:u[c.id].push(c);return Object.values(u)}getPastKeyValues(a,u){const c=Object.create(null);for(const p in a)if(p.startsWith("present")){let s=p.replace("present","past_key_values");u&&p.includes("encoder")?c[s]=u[s]:c[s]=a[p]}return c}getAttentions(a){const u=Object.create(null);for(const c of["cross_attentions","decoder_attentions"]){const p=[];for(const s in a)if(s.startsWith(c)){const h=s.split(".").pop();p[h]=a[s]}u[c]=p}return u}addPastKeyValues(a,u){if(u)Object.assign(a,u);else if(this.config.is_encoder_decoder&&(this.add_encoder_pkv??!0)){let c=[1,this.num_encoder_heads,0,this.encoder_dim_kv],p=[1,this.num_decoder_heads,0,this.decoder_dim_kv];for(let s=0;s{let t=Array.from({length:this.config.decoder_layers},(m,_)=>cat(o.map(y=>y[_]),2)),e=stack(u.map(([m,_])=>c?t[m].slice(null,_,null,[0,c]):t[m].slice(null,_)));e=e.transpose(1,0,2,3);let[r,i]=std_mean(e,-2,0,!0),d=e.clone();for(let m=0;me[_+1]-e[_]),d=mergeArrays([1],i).map(m=>!!m),g=[];for(let m=0;m=e&&(Array.from(O.data).filter(T=>T>=u).length>0||m>=t))break}const _=cat(i),{waveform:y}=await sessionRun(s.session,{spectrogram:_});return{spectrogram:_,waveform:y}}}class SpeechT5HifiGan extends PreTrainedModel{constructor(){super(...arguments);je(this,"main_input_name","spectrogram")}}const MODEL_MAPPING_NAMES_ENCODER_ONLY=new Map([["bert",["BertModel",BertModel]],["camembert",["CamembertModel",CamembertModel]],["deberta",["DebertaModel",DebertaModel]],["deberta-v2",["DebertaV2Model",DebertaV2Model]],["mpnet",["MPNetModel",MPNetModel]],["albert",["AlbertModel",AlbertModel]],["distilbert",["DistilBertModel",DistilBertModel]],["roberta",["RobertaModel",RobertaModel]],["xlm",["XLMModel",XLMModel]],["xlm-roberta",["XLMRobertaModel",XLMRobertaModel]],["clip",["CLIPModel",CLIPModel]],["mobilebert",["MobileBertModel",MobileBertModel]],["squeezebert",["SqueezeBertModel",SqueezeBertModel]],["wav2vec2",["Wav2Vec2Model",Wav2Vec2Model]],["wavlm",["WavLMModel",WavLMModel]],["detr",["DetrModel",DetrModel]],["vit",["ViTModel",ViTModel]],["mobilevit",["MobileViTModel",MobileViTModel]],["beit",["BeitModel",BeitModel]],["deit",["DeiTModel",DeiTModel]],["resnet",["ResNetModel",ResNetModel]],["swin",["SwinModel",SwinModel]],["donut-swin",["DonutSwinModel",DonutSwinModel]],["yolos",["YolosModel",YolosModel]],["hifigan",["SpeechT5HifiGan",SpeechT5HifiGan]],["sam",["SamModel",SamModel]]]),MODEL_MAPPING_NAMES_ENCODER_DECODER=new Map([["t5",["T5Model",T5Model]],["longt5",["LongT5Model",LongT5Model]],["mt5",["MT5Model",MT5Model]],["bart",["BartModel",BartModel]],["mbart",["MBartModel",MBartModel]],["marian",["MarianModel",MarianModel]],["whisper",["WhisperModel",WhisperModel]],["m2m_100",["M2M100Model",M2M100Model]],["blenderbot",["BlenderbotModel",BlenderbotModel]],["blenderbot-small",["BlenderbotSmallModel",BlenderbotSmallModel]]]),MODEL_MAPPING_NAMES_DECODER_ONLY=new Map([["bloom",["BloomModel",BloomModel]],["gpt2",["GPT2Model",GPT2Model]],["gptj",["GPTJModel",GPTJModel]],["gpt_bigcode",["GPTBigCodeModel",GPTBigCodeModel]],["gpt_neo",["GPTNeoModel",GPTNeoModel]],["gpt_neox",["GPTNeoXModel",GPTNeoXModel]],["codegen",["CodeGenModel",CodeGenModel]],["llama",["LlamaModel",LlamaModel]],["mpt",["MptModel",MptModel]],["opt",["OPTModel",OPTModel]]]),MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES=new Map([["speecht5",["SpeechT5ForSpeechToText",SpeechT5ForSpeechToText]],["whisper",["WhisperForConditionalGeneration",WhisperForConditionalGeneration]]]),MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES=new Map([["speecht5",["SpeechT5ForTextToSpeech",SpeechT5ForTextToSpeech]]]),MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES=new Map([["bert",["BertForSequenceClassification",BertForSequenceClassification]],["camembert",["CamembertForSequenceClassification",CamembertForSequenceClassification]],["deberta",["DebertaForSequenceClassification",DebertaForSequenceClassification]],["deberta-v2",["DebertaV2ForSequenceClassification",DebertaV2ForSequenceClassification]],["mpnet",["MPNetForSequenceClassification",MPNetForSequenceClassification]],["albert",["AlbertForSequenceClassification",AlbertForSequenceClassification]],["distilbert",["DistilBertForSequenceClassification",DistilBertForSequenceClassification]],["roberta",["RobertaForSequenceClassification",RobertaForSequenceClassification]],["xlm",["XLMForSequenceClassification",XLMForSequenceClassification]],["xlm-roberta",["XLMRobertaForSequenceClassification",XLMRobertaForSequenceClassification]],["bart",["BartForSequenceClassification",BartForSequenceClassification]],["mbart",["MBartForSequenceClassification",MBartForSequenceClassification]],["mobilebert",["MobileBertForSequenceClassification",MobileBertForSequenceClassification]],["squeezebert",["SqueezeBertForSequenceClassification",SqueezeBertForSequenceClassification]]]),MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES=new Map([["bert",["BertForTokenClassification",BertForTokenClassification]],["camembert",["CamembertForTokenClassification",CamembertForTokenClassification]],["deberta",["DebertaForTokenClassification",DebertaForTokenClassification]],["deberta-v2",["DebertaV2ForTokenClassification",DebertaV2ForTokenClassification]],["mpnet",["MPNetForTokenClassification",MPNetForTokenClassification]],["distilbert",["DistilBertForTokenClassification",DistilBertForTokenClassification]],["roberta",["RobertaForTokenClassification",RobertaForTokenClassification]],["xlm",["XLMForTokenClassification",XLMForTokenClassification]],["xlm-roberta",["XLMRobertaForTokenClassification",XLMRobertaForTokenClassification]]]),MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES=new Map([["t5",["T5ForConditionalGeneration",T5ForConditionalGeneration]],["longt5",["LongT5ForConditionalGeneration",LongT5ForConditionalGeneration]],["mt5",["MT5ForConditionalGeneration",MT5ForConditionalGeneration]],["bart",["BartForConditionalGeneration",BartForConditionalGeneration]],["mbart",["MBartForConditionalGeneration",MBartForConditionalGeneration]],["marian",["MarianMTModel",MarianMTModel]],["m2m_100",["M2M100ForConditionalGeneration",M2M100ForConditionalGeneration]],["blenderbot",["BlenderbotForConditionalGeneration",BlenderbotForConditionalGeneration]],["blenderbot-small",["BlenderbotSmallForConditionalGeneration",BlenderbotSmallForConditionalGeneration]]]),MODEL_WITH_LM_HEAD_MAPPING_NAMES=new Map([["bloom",["BloomForCausalLM",BloomForCausalLM]],["gpt2",["GPT2LMHeadModel",GPT2LMHeadModel]],["gptj",["GPTJForCausalLM",GPTJForCausalLM]],["gpt_bigcode",["GPTBigCodeForCausalLM",GPTBigCodeForCausalLM]],["gpt_neo",["GPTNeoForCausalLM",GPTNeoForCausalLM]],["gpt_neox",["GPTNeoXForCausalLM",GPTNeoXForCausalLM]],["codegen",["CodeGenForCausalLM",CodeGenForCausalLM]],["llama",["LlamaForCausalLM",LlamaForCausalLM]],["mpt",["MptForCausalLM",MptForCausalLM]],["opt",["OPTForCausalLM",OPTForCausalLM]],["mbart",["MBartForCausalLM",MBartForCausalLM]]]),MODEL_FOR_MASKED_LM_MAPPING_NAMES=new Map([["bert",["BertForMaskedLM",BertForMaskedLM]],["camembert",["CamembertForMaskedLM",CamembertForMaskedLM]],["deberta",["DebertaForMaskedLM",DebertaForMaskedLM]],["deberta-v2",["DebertaV2ForMaskedLM",DebertaV2ForMaskedLM]],["mpnet",["MPNetForMaskedLM",MPNetForMaskedLM]],["albert",["AlbertForMaskedLM",AlbertForMaskedLM]],["distilbert",["DistilBertForMaskedLM",DistilBertForMaskedLM]],["roberta",["RobertaForMaskedLM",RobertaForMaskedLM]],["xlm",["XLMWithLMHeadModel",XLMWithLMHeadModel]],["xlm-roberta",["XLMRobertaForMaskedLM",XLMRobertaForMaskedLM]],["mobilebert",["MobileBertForMaskedLM",MobileBertForMaskedLM]],["squeezebert",["SqueezeBertForMaskedLM",SqueezeBertForMaskedLM]]]),MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES=new Map([["bert",["BertForQuestionAnswering",BertForQuestionAnswering]],["camembert",["CamembertForQuestionAnswering",CamembertForQuestionAnswering]],["deberta",["DebertaForQuestionAnswering",DebertaForQuestionAnswering]],["deberta-v2",["DebertaV2ForQuestionAnswering",DebertaV2ForQuestionAnswering]],["mpnet",["MPNetForQuestionAnswering",MPNetForQuestionAnswering]],["albert",["AlbertForQuestionAnswering",AlbertForQuestionAnswering]],["distilbert",["DistilBertForQuestionAnswering",DistilBertForQuestionAnswering]],["roberta",["RobertaForQuestionAnswering",RobertaForQuestionAnswering]],["xlm",["XLMForQuestionAnswering",XLMForQuestionAnswering]],["xlm-roberta",["XLMRobertaForQuestionAnswering",XLMRobertaForQuestionAnswering]],["mobilebert",["MobileBertForQuestionAnswering",MobileBertForQuestionAnswering]],["squeezebert",["SqueezeBertForQuestionAnswering",SqueezeBertForQuestionAnswering]]]),MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES=new Map([["vision-encoder-decoder",["VisionEncoderDecoderModel",VisionEncoderDecoderModel]]]),MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES=new Map([["vit",["ViTForImageClassification",ViTForImageClassification]],["mobilevit",["MobileViTForImageClassification",MobileViTForImageClassification]],["beit",["BeitForImageClassification",BeitForImageClassification]],["deit",["DeiTForImageClassification",DeiTForImageClassification]],["resnet",["ResNetForImageClassification",ResNetForImageClassification]],["swin",["SwinForImageClassification",SwinForImageClassification]]]),MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES=new Map([["detr",["DetrForObjectDetection",DetrForObjectDetection]],["yolos",["YolosForObjectDetection",YolosForObjectDetection]]]),MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES=new Map([["detr",["DetrForSegmentation",DetrForSegmentation]]]),MODEL_FOR_MASK_GENERATION_MAPPING_NAMES=new Map([["sam",["SamModel",SamModel]]]),MODEL_FOR_CTC_MAPPING_NAMES=new Map([["wav2vec2",["Wav2Vec2ForCTC",Wav2Vec2ForCTC]],["wavlm",["WavLMForCTC",WavLMForCTC]]]),MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES=new Map([["wav2vec2",["Wav2Vec2ForSequenceClassification",Wav2Vec2ForSequenceClassification]],["wavlm",["WavLMForSequenceClassification",WavLMForSequenceClassification]]]),MODEL_CLASS_TYPE_MAPPING=[[MODEL_MAPPING_NAMES_ENCODER_ONLY,MODEL_TYPES.EncoderOnly],[MODEL_MAPPING_NAMES_ENCODER_DECODER,MODEL_TYPES.EncoderDecoder],[MODEL_MAPPING_NAMES_DECODER_ONLY,MODEL_TYPES.DecoderOnly],[MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES,MODEL_TYPES.Seq2Seq],[MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES,MODEL_TYPES.Seq2Seq],[MODEL_WITH_LM_HEAD_MAPPING_NAMES,MODEL_TYPES.DecoderOnly],[MODEL_FOR_MASKED_LM_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES,MODEL_TYPES.Vision2Seq],[MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_MASK_GENERATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_CTC_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,MODEL_TYPES.EncoderOnly],[MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES,MODEL_TYPES.Seq2Seq]];for(const[b,n]of MODEL_CLASS_TYPE_MAPPING)for(const[a,u]of b.values())MODEL_TYPE_MAPPING.set(a,n),MODEL_CLASS_TO_NAME_MAPPING.set(u,a),MODEL_NAME_TO_CLASS_MAPPING.set(a,u);const CUSTOM_MAPPING=[["CLIPTextModelWithProjection",CLIPTextModelWithProjection,MODEL_TYPES.EncoderOnly],["CLIPVisionModelWithProjection",CLIPVisionModelWithProjection,MODEL_TYPES.EncoderOnly]];for(const[b,n,a]of CUSTOM_MAPPING)MODEL_TYPE_MAPPING.set(b,a),MODEL_CLASS_TO_NAME_MAPPING.set(n,b),MODEL_NAME_TO_CLASS_MAPPING.set(b,n);class Seq2SeqLMOutput extends ModelOutput{constructor({logits:n,past_key_values:a,encoder_outputs:u,decoder_attentions:c=null,cross_attentions:p=null}){super(),this.logits=n,this.past_key_values=a,this.encoder_outputs=u,this.decoder_attentions=c,this.cross_attentions=p}}class SequenceClassifierOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class TokenClassifierOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class MaskedLMOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class QuestionAnsweringModelOutput extends ModelOutput{constructor({start_logits:n,end_logits:a}){super(),this.start_logits=n,this.end_logits=a}}class CausalLMOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}const BROWSER_ENV=typeof self<"u";if(!BROWSER_ENV){if(!sharp)throw new Error("Unable to load image processing library.")}function encodeWAV(b){let n=44;const a=new ArrayBuffer(n+b.length*4),u=new DataView(a),c=16e3;writeString(u,0,"RIFF"),u.setUint32(4,36+b.length*4,!0),writeString(u,8,"WAVE"),writeString(u,12,"fmt "),u.setUint32(16,16,!0),u.setUint16(20,3,!0),u.setUint16(22,1,!0),u.setUint32(24,c,!0),u.setUint32(28,c*4,!0),u.setUint16(32,4,!0),u.setUint16(34,32,!0),writeString(u,36,"data"),u.setUint32(40,b.length*4,!0);for(let p=0;p{const c=await Promise.all([this.tokenizer,this.model_instance,this.vocoder_instance]);self.postMessage({status:"ready"}),a(c)})}static async getSpeakerEmbeddings(n){const a=`${this.BASE_URL}${n}.bin`;return new Tensor("float32",new Float32Array(await(await fetch(a)).arrayBuffer()),[1,512])}}je(MyTextToSpeechPipeline,"BASE_URL","https://huggingface.co/datasets/Xenova/cmu-arctic-xvectors-extracted/resolve/main/"),je(MyTextToSpeechPipeline,"model_id","Xenova/speecht5_tts"),je(MyTextToSpeechPipeline,"vocoder_id","Xenova/speecht5_hifigan"),je(MyTextToSpeechPipeline,"tokenizer_instance",null),je(MyTextToSpeechPipeline,"model_instance",null),je(MyTextToSpeechPipeline,"vocoder_instance",null);const speaker_embeddings_cache=new Map;self.addEventListener("message",async b=>{const[n,a,u]=await MyTextToSpeechPipeline.getInstance(f=>{self.postMessage(f)}),{input_ids:c}=n(b.data.text);let p=speaker_embeddings_cache.get(b.data.speaker_id);p===void 0&&(p=await MyTextToSpeechPipeline.getSpeakerEmbeddings(b.data.speaker_id),speaker_embeddings_cache.set(b.data.speaker_id,p));const{waveform:s}=await a.generate_speech(c,p,{vocoder:u}),h=encodeWAV(s.data);self.postMessage({status:"complete",output:new Blob([h],{type:"audio/wav"})})})})(); diff --git a/spaces/Xenova/whisper-web/assets/index-2d33b655.css b/spaces/Xenova/whisper-web/assets/index-2d33b655.css deleted file mode 100644 index 5ce6e33a6204138cb0a397864ddfa1bfcdb6e591..0000000000000000000000000000000000000000 --- a/spaces/Xenova/whisper-web/assets/index-2d33b655.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width: 640px){.container{max-width:640px}}@media (min-width: 768px){.container{max-width:768px}}@media (min-width: 1024px){.container{max-width:1024px}}@media (min-width: 1280px){.container{max-width:1280px}}@media (min-width: 1536px){.container{max-width:1536px}}.static{position:static}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.inset-0{inset:0px}.right-4{right:1rem}.top-0{top:0px}.z-10{z-index:10}.my-2{margin-top:.5rem;margin-bottom:.5rem}.mb-1{margin-bottom:.25rem}.mb-2{margin-bottom:.5rem}.mb-3{margin-bottom:.75rem}.mb-5{margin-bottom:1.25rem}.ml-2{margin-left:.5rem}.ml-4{margin-left:1rem}.mr-2{margin-right:.5rem}.mr-3{margin-right:.75rem}.mr-5{margin-right:1.25rem}.ms-1{-webkit-margin-start:.25rem;margin-inline-start:.25rem}.mt-0{margin-top:0}.mt-0\.5{margin-top:.125rem}.mt-1{margin-top:.25rem}.mt-3{margin-top:.75rem}.mt-4{margin-top:1rem}.block{display:block}.inline{display:inline}.flex{display:flex}.inline-flex{display:inline-flex}.hidden{display:none}.h-1{height:.25rem}.h-14{height:3.5rem}.h-4{height:1rem}.h-7{height:1.75rem}.h-full{height:100%}.max-h-\[20rem\]{max-height:20rem}.min-h-full{min-height:100%}.min-h-screen{min-height:100vh}.w-4{width:1rem}.w-7{width:1.75rem}.w-\[1px\]{width:1px}.w-full{width:100%}.max-w-md{max-width:28rem}.scale-100{--tw-scale-x: 1;--tw-scale-y: 1;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.scale-95{--tw-scale-x: .95;--tw-scale-y: .95;transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skew(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}@keyframes spin{to{transform:rotate(360deg)}}.animate-spin{animation:spin 1s linear infinite}.flex-row{flex-direction:row}.flex-row-reverse{flex-direction:row-reverse}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.space-x-2>:not([hidden])~:not([hidden]){--tw-space-x-reverse: 0;margin-right:calc(.5rem * var(--tw-space-x-reverse));margin-left:calc(.5rem * calc(1 - var(--tw-space-x-reverse)))}.overflow-hidden{overflow:hidden}.overflow-y-auto{overflow-y:auto}.whitespace-nowrap{white-space:nowrap}.rounded-2xl{border-radius:1rem}.rounded-full{border-radius:9999px}.rounded-lg{border-radius:.5rem}.rounded-md{border-radius:.375rem}.border{border-width:1px}.border-gray-300{--tw-border-opacity: 1;border-color:rgb(209 213 219 / var(--tw-border-opacity))}.border-gray-400{--tw-border-opacity: 1;border-color:rgb(156 163 175 / var(--tw-border-opacity))}.border-transparent{border-color:transparent}.bg-black{--tw-bg-opacity: 1;background-color:rgb(0 0 0 / var(--tw-bg-opacity))}.bg-blue-500{--tw-bg-opacity: 1;background-color:rgb(59 130 246 / var(--tw-bg-opacity))}.bg-blue-600{--tw-bg-opacity: 1;background-color:rgb(37 99 235 / var(--tw-bg-opacity))}.bg-blue-700{--tw-bg-opacity: 1;background-color:rgb(29 78 216 / var(--tw-bg-opacity))}.bg-gray-200{--tw-bg-opacity: 1;background-color:rgb(229 231 235 / var(--tw-bg-opacity))}.bg-gray-50{--tw-bg-opacity: 1;background-color:rgb(249 250 251 / var(--tw-bg-opacity))}.bg-green-500{--tw-bg-opacity: 1;background-color:rgb(34 197 94 / var(--tw-bg-opacity))}.bg-indigo-100{--tw-bg-opacity: 1;background-color:rgb(224 231 255 / var(--tw-bg-opacity))}.bg-indigo-600{--tw-bg-opacity: 1;background-color:rgb(79 70 229 / var(--tw-bg-opacity))}.bg-slate-200{--tw-bg-opacity: 1;background-color:rgb(226 232 240 / var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity: 1;background-color:rgb(255 255 255 / var(--tw-bg-opacity))}.bg-opacity-25{--tw-bg-opacity: .25}.p-2{padding:.5rem}.p-2\.5{padding:.625rem}.p-4{padding:1rem}.p-6{padding:1.5rem}.px-1{padding-left:.25rem;padding-right:.25rem}.px-2{padding-left:.5rem;padding-right:.5rem}.px-4{padding-left:1rem;padding-right:1rem}.px-5{padding-left:1.25rem;padding-right:1.25rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.py-2\.5{padding-top:.625rem;padding-bottom:.625rem}.text-left{text-align:left}.text-center{text-align:center}.text-right{text-align:right}.align-middle{vertical-align:middle}.text-5xl{font-size:3rem;line-height:1}.text-lg{font-size:1.125rem;line-height:1.75rem}.text-sm{font-size:.875rem;line-height:1.25rem}.font-extrabold{font-weight:800}.font-medium{font-weight:500}.font-semibold{font-weight:600}.leading-6{line-height:1.5rem}.tracking-tight{letter-spacing:-.025em}.text-gray-500{--tw-text-opacity: 1;color:rgb(107 114 128 / var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity: 1;color:rgb(17 24 39 / var(--tw-text-opacity))}.text-indigo-100{--tw-text-opacity: 1;color:rgb(224 231 255 / var(--tw-text-opacity))}.text-indigo-900{--tw-text-opacity: 1;color:rgb(49 46 129 / var(--tw-text-opacity))}.text-slate-500{--tw-text-opacity: 1;color:rgb(100 116 139 / var(--tw-text-opacity))}.text-slate-900{--tw-text-opacity: 1;color:rgb(15 23 42 / var(--tw-text-opacity))}.text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.opacity-0{opacity:0}.opacity-100{opacity:1}.shadow-xl{--tw-shadow: 0 20px 25px -5px rgb(0 0 0 / .1), 0 8px 10px -6px rgb(0 0 0 / .1);--tw-shadow-colored: 0 20px 25px -5px var(--tw-shadow-color), 0 8px 10px -6px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow, 0 0 #0000),var(--tw-ring-shadow, 0 0 #0000),var(--tw-shadow)}.shadow-black\/5{--tw-shadow-color: rgb(0 0 0 / .05);--tw-shadow: var(--tw-shadow-colored)}.ring-1{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(1px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.ring-slate-700\/10{--tw-ring-color: rgb(51 65 85 / .1)}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.duration-100{transition-duration:.1s}.duration-200{transition-duration:.2s}.duration-300{transition-duration:.3s}.ease-in{transition-timing-function:cubic-bezier(.4,0,1,1)}.ease-out{transition-timing-function:cubic-bezier(0,0,.2,1)}html,body,#root{height:100%}audio::-webkit-media-controls-panel{background-color:#fff}.container{width:41rem;max-width:95vw}.hover\:bg-blue-800:hover{--tw-bg-opacity: 1;background-color:rgb(30 64 175 / var(--tw-bg-opacity))}.hover\:bg-green-600:hover{--tw-bg-opacity: 1;background-color:rgb(22 163 74 / var(--tw-bg-opacity))}.hover\:bg-indigo-200:hover{--tw-bg-opacity: 1;background-color:rgb(199 210 254 / var(--tw-bg-opacity))}.hover\:bg-indigo-50:hover{--tw-bg-opacity: 1;background-color:rgb(238 242 255 / var(--tw-bg-opacity))}.hover\:bg-indigo-500:hover{--tw-bg-opacity: 1;background-color:rgb(99 102 241 / var(--tw-bg-opacity))}.hover\:text-indigo-600:hover{--tw-text-opacity: 1;color:rgb(79 70 229 / var(--tw-text-opacity))}.focus\:border-blue-500:focus{--tw-border-opacity: 1;border-color:rgb(59 130 246 / var(--tw-border-opacity))}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.focus\:ring-4:focus{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(4px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.focus\:ring-blue-300:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(147 197 253 / var(--tw-ring-opacity))}.focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}.focus\:ring-green-300:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(134 239 172 / var(--tw-ring-opacity))}.focus-visible\:ring-2:focus-visible{--tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(2px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow, 0 0 #0000)}.focus-visible\:ring-indigo-500:focus-visible{--tw-ring-opacity: 1;--tw-ring-color: rgb(99 102 241 / var(--tw-ring-opacity))}.focus-visible\:ring-offset-2:focus-visible{--tw-ring-offset-width: 2px}@media (prefers-color-scheme: dark){.dark\:border-gray-600{--tw-border-opacity: 1;border-color:rgb(75 85 99 / var(--tw-border-opacity))}.dark\:bg-blue-600{--tw-bg-opacity: 1;background-color:rgb(37 99 235 / var(--tw-bg-opacity))}.dark\:bg-gray-700{--tw-bg-opacity: 1;background-color:rgb(55 65 81 / var(--tw-bg-opacity))}.dark\:bg-green-600{--tw-bg-opacity: 1;background-color:rgb(22 163 74 / var(--tw-bg-opacity))}.dark\:text-white{--tw-text-opacity: 1;color:rgb(255 255 255 / var(--tw-text-opacity))}.dark\:placeholder-gray-400::-moz-placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:placeholder-gray-400::placeholder{--tw-placeholder-opacity: 1;color:rgb(156 163 175 / var(--tw-placeholder-opacity))}.dark\:hover\:bg-blue-700:hover{--tw-bg-opacity: 1;background-color:rgb(29 78 216 / var(--tw-bg-opacity))}.dark\:hover\:bg-green-700:hover{--tw-bg-opacity: 1;background-color:rgb(21 128 61 / var(--tw-bg-opacity))}.dark\:focus\:border-blue-500:focus{--tw-border-opacity: 1;border-color:rgb(59 130 246 / var(--tw-border-opacity))}.dark\:focus\:ring-blue-500:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(59 130 246 / var(--tw-ring-opacity))}.dark\:focus\:ring-blue-800:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(30 64 175 / var(--tw-ring-opacity))}.dark\:focus\:ring-green-800:focus{--tw-ring-opacity: 1;--tw-ring-color: rgb(22 101 52 / var(--tw-ring-opacity))}}@media (min-width: 640px){.sm\:text-2xl{font-size:1.5rem;line-height:2rem}.sm\:text-7xl{font-size:4.5rem;line-height:1}} diff --git a/spaces/XzJosh/Ava-Bert-VITS2/transforms.py b/spaces/XzJosh/Ava-Bert-VITS2/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/XzJosh/Bella-Bert-VITS2/models.py b/spaces/XzJosh/Bella-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bella-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/XzJosh/Carol-Bert-VITS2/text/japanese.py b/spaces/XzJosh/Carol-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/XzJosh/Spade-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Spade-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Spade-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. - -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/conformer.py b/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/conformer.py deleted file mode 100644 index 21e1ecdda7ec069864d3904abb4360ec5aee637e..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/commons/conformer/conformer.py +++ /dev/null @@ -1,72 +0,0 @@ -from torch import nn -from .espnet_positional_embedding import RelPositionalEncoding -from .espnet_transformer_attn import RelPositionMultiHeadedAttention -from .layers import Swish, ConvolutionModule, EncoderLayer, MultiLayeredConv1d -from ..layers import Embedding - - -class ConformerLayers(nn.Module): - def __init__(self, hidden_size, num_layers, kernel_size=9, dropout=0.0, num_heads=4, - use_last_norm=True, save_hidden=False): - super().__init__() - self.use_last_norm = use_last_norm - self.layers = nn.ModuleList() - positionwise_layer = MultiLayeredConv1d - positionwise_layer_args = (hidden_size, hidden_size * 4, 1, dropout) - self.pos_embed = RelPositionalEncoding(hidden_size, dropout) - self.encoder_layers = nn.ModuleList([EncoderLayer( - hidden_size, - RelPositionMultiHeadedAttention(num_heads, hidden_size, 0.0), - positionwise_layer(*positionwise_layer_args), - positionwise_layer(*positionwise_layer_args), - ConvolutionModule(hidden_size, kernel_size, Swish()), - dropout, - ) for _ in range(num_layers)]) - if self.use_last_norm: - self.layer_norm = nn.LayerNorm(hidden_size) - else: - self.layer_norm = nn.Linear(hidden_size, hidden_size) - self.save_hidden = save_hidden - if save_hidden: - self.hiddens = [] - - def forward(self, x, padding_mask=None): - """ - - :param x: [B, T, H] - :param padding_mask: [B, T] - :return: [B, T, H] - """ - self.hiddens = [] - nonpadding_mask = x.abs().sum(-1) > 0 - x = self.pos_embed(x) - for l in self.encoder_layers: - x, mask = l(x, nonpadding_mask[:, None, :]) - if self.save_hidden: - self.hiddens.append(x[0]) - x = x[0] - x = self.layer_norm(x) * nonpadding_mask.float()[:, :, None] - return x - - -class ConformerEncoder(ConformerLayers): - def __init__(self, hidden_size, dict_size, num_layers=None): - conformer_enc_kernel_size = 9 - super().__init__(hidden_size, num_layers, conformer_enc_kernel_size) - self.embed = Embedding(dict_size, hidden_size, padding_idx=0) - - def forward(self, x): - """ - - :param src_tokens: [B, T] - :return: [B x T x C] - """ - x = self.embed(x) # [B, T, H] - x = super(ConformerEncoder, self).forward(x) - return x - - -class ConformerDecoder(ConformerLayers): - def __init__(self, hidden_size, num_layers): - conformer_dec_kernel_size = 9 - super().__init__(hidden_size, num_layers, conformer_dec_kernel_size) diff --git a/spaces/Yabo/ControlVideo/README.md b/spaces/Yabo/ControlVideo/README.md deleted file mode 100644 index ef7228c1f12b2c9b5247437cbdfb26b24b4374d5..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: ControlVideo -emoji: 🦩 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- -### Citation -``` -@article{zhang2023controlvideo, - title={ControlVideo: Training-free Controllable Text-to-Video Generation}, - author={Zhang, Yabo and Wei, Yuxiang and Jiang, Dongsheng and Zhang, Xiaopeng and Zuo, Wangmeng and Tian, Qi}, - journal={arXiv preprint arXiv:2305.13077}, - year={2023} -} -``` \ No newline at end of file diff --git a/spaces/Yuliang/ICON/lib/pymaf/models/pymaf_net.py b/spaces/Yuliang/ICON/lib/pymaf/models/pymaf_net.py deleted file mode 100644 index 2807abaa3c7da0be6913d2fd68cb0ad1721e2bf1..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/pymaf/models/pymaf_net.py +++ /dev/null @@ -1,362 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np - -from lib.pymaf.utils.geometry import rot6d_to_rotmat, projection, rotation_matrix_to_angle_axis -from .maf_extractor import MAF_Extractor -from .smpl import SMPL, SMPL_MODEL_DIR, SMPL_MEAN_PARAMS, H36M_TO_J14 -from .hmr import ResNet_Backbone -from .res_module import IUV_predict_layer -from lib.common.config import cfg -import logging - -logger = logging.getLogger(__name__) - -BN_MOMENTUM = 0.1 - - -class Regressor(nn.Module): - def __init__(self, feat_dim, smpl_mean_params): - super().__init__() - - npose = 24 * 6 - - self.fc1 = nn.Linear(feat_dim + npose + 13, 1024) - self.drop1 = nn.Dropout() - self.fc2 = nn.Linear(1024, 1024) - self.drop2 = nn.Dropout() - self.decpose = nn.Linear(1024, npose) - self.decshape = nn.Linear(1024, 10) - self.deccam = nn.Linear(1024, 3) - nn.init.xavier_uniform_(self.decpose.weight, gain=0.01) - nn.init.xavier_uniform_(self.decshape.weight, gain=0.01) - nn.init.xavier_uniform_(self.deccam.weight, gain=0.01) - - self.smpl = SMPL(SMPL_MODEL_DIR, batch_size=64, create_transl=False) - - mean_params = np.load(smpl_mean_params) - init_pose = torch.from_numpy(mean_params['pose'][:]).unsqueeze(0) - init_shape = torch.from_numpy( - mean_params['shape'][:].astype('float32')).unsqueeze(0) - init_cam = torch.from_numpy(mean_params['cam']).unsqueeze(0) - self.register_buffer('init_pose', init_pose) - self.register_buffer('init_shape', init_shape) - self.register_buffer('init_cam', init_cam) - - def forward(self, - x, - init_pose=None, - init_shape=None, - init_cam=None, - n_iter=1, - J_regressor=None): - batch_size = x.shape[0] - - if init_pose is None: - init_pose = self.init_pose.expand(batch_size, -1) - if init_shape is None: - init_shape = self.init_shape.expand(batch_size, -1) - if init_cam is None: - init_cam = self.init_cam.expand(batch_size, -1) - - pred_pose = init_pose - pred_shape = init_shape - pred_cam = init_cam - for i in range(n_iter): - xc = torch.cat([x, pred_pose, pred_shape, pred_cam], 1) - xc = self.fc1(xc) - xc = self.drop1(xc) - xc = self.fc2(xc) - xc = self.drop2(xc) - pred_pose = self.decpose(xc) + pred_pose - pred_shape = self.decshape(xc) + pred_shape - pred_cam = self.deccam(xc) + pred_cam - - pred_rotmat = rot6d_to_rotmat(pred_pose).view(batch_size, 24, 3, 3) - - pred_output = self.smpl(betas=pred_shape, - body_pose=pred_rotmat[:, 1:], - global_orient=pred_rotmat[:, 0].unsqueeze(1), - pose2rot=False) - - pred_vertices = pred_output.vertices - pred_joints = pred_output.joints - pred_smpl_joints = pred_output.smpl_joints - pred_keypoints_2d = projection(pred_joints, pred_cam) - pose = rotation_matrix_to_angle_axis(pred_rotmat.reshape(-1, 3, - 3)).reshape( - -1, 72) - - if J_regressor is not None: - pred_joints = torch.matmul(J_regressor, pred_vertices) - pred_pelvis = pred_joints[:, [0], :].clone() - pred_joints = pred_joints[:, H36M_TO_J14, :] - pred_joints = pred_joints - pred_pelvis - - output = { - 'theta': torch.cat([pred_cam, pred_shape, pose], dim=1), - 'verts': pred_vertices, - 'kp_2d': pred_keypoints_2d, - 'kp_3d': pred_joints, - 'smpl_kp_3d': pred_smpl_joints, - 'rotmat': pred_rotmat, - 'pred_cam': pred_cam, - 'pred_shape': pred_shape, - 'pred_pose': pred_pose, - } - return output - - def forward_init(self, - x, - init_pose=None, - init_shape=None, - init_cam=None, - n_iter=1, - J_regressor=None): - batch_size = x.shape[0] - - if init_pose is None: - init_pose = self.init_pose.expand(batch_size, -1) - if init_shape is None: - init_shape = self.init_shape.expand(batch_size, -1) - if init_cam is None: - init_cam = self.init_cam.expand(batch_size, -1) - - pred_pose = init_pose - pred_shape = init_shape - pred_cam = init_cam - - pred_rotmat = rot6d_to_rotmat(pred_pose.contiguous()).view( - batch_size, 24, 3, 3) - - pred_output = self.smpl(betas=pred_shape, - body_pose=pred_rotmat[:, 1:], - global_orient=pred_rotmat[:, 0].unsqueeze(1), - pose2rot=False) - - pred_vertices = pred_output.vertices - pred_joints = pred_output.joints - pred_smpl_joints = pred_output.smpl_joints - pred_keypoints_2d = projection(pred_joints, pred_cam) - pose = rotation_matrix_to_angle_axis(pred_rotmat.reshape(-1, 3, - 3)).reshape( - -1, 72) - - if J_regressor is not None: - pred_joints = torch.matmul(J_regressor, pred_vertices) - pred_pelvis = pred_joints[:, [0], :].clone() - pred_joints = pred_joints[:, H36M_TO_J14, :] - pred_joints = pred_joints - pred_pelvis - - output = { - 'theta': torch.cat([pred_cam, pred_shape, pose], dim=1), - 'verts': pred_vertices, - 'kp_2d': pred_keypoints_2d, - 'kp_3d': pred_joints, - 'smpl_kp_3d': pred_smpl_joints, - 'rotmat': pred_rotmat, - 'pred_cam': pred_cam, - 'pred_shape': pred_shape, - 'pred_pose': pred_pose, - } - return output - - -class PyMAF(nn.Module): - """ PyMAF based Deep Regressor for Human Mesh Recovery - PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop, in ICCV, 2021 - """ - - def __init__(self, smpl_mean_params=SMPL_MEAN_PARAMS, pretrained=True): - super().__init__() - self.feature_extractor = ResNet_Backbone( - model=cfg.MODEL.PyMAF.BACKBONE, pretrained=pretrained) - - # deconv layers - self.inplanes = self.feature_extractor.inplanes - self.deconv_with_bias = cfg.RES_MODEL.DECONV_WITH_BIAS - self.deconv_layers = self._make_deconv_layer( - cfg.RES_MODEL.NUM_DECONV_LAYERS, - cfg.RES_MODEL.NUM_DECONV_FILTERS, - cfg.RES_MODEL.NUM_DECONV_KERNELS, - ) - - self.maf_extractor = nn.ModuleList() - for _ in range(cfg.MODEL.PyMAF.N_ITER): - self.maf_extractor.append(MAF_Extractor()) - ma_feat_len = self.maf_extractor[-1].Dmap.shape[ - 0] * cfg.MODEL.PyMAF.MLP_DIM[-1] - - grid_size = 21 - xv, yv = torch.meshgrid([ - torch.linspace(-1, 1, grid_size), - torch.linspace(-1, 1, grid_size) - ]) - points_grid = torch.stack([xv.reshape(-1), - yv.reshape(-1)]).unsqueeze(0) - self.register_buffer('points_grid', points_grid) - grid_feat_len = grid_size * grid_size * cfg.MODEL.PyMAF.MLP_DIM[-1] - - self.regressor = nn.ModuleList() - for i in range(cfg.MODEL.PyMAF.N_ITER): - if i == 0: - ref_infeat_dim = grid_feat_len - else: - ref_infeat_dim = ma_feat_len - self.regressor.append( - Regressor(feat_dim=ref_infeat_dim, - smpl_mean_params=smpl_mean_params)) - - dp_feat_dim = 256 - self.with_uv = cfg.LOSS.POINT_REGRESSION_WEIGHTS > 0 - if cfg.MODEL.PyMAF.AUX_SUPV_ON: - self.dp_head = IUV_predict_layer(feat_dim=dp_feat_dim) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def _make_deconv_layer(self, num_layers, num_filters, num_kernels): - """ - Deconv_layer used in Simple Baselines: - Xiao et al. Simple Baselines for Human Pose Estimation and Tracking - https://github.com/microsoft/human-pose-estimation.pytorch - """ - assert num_layers == len(num_filters), \ - 'ERROR: num_deconv_layers is different len(num_deconv_filters)' - assert num_layers == len(num_kernels), \ - 'ERROR: num_deconv_layers is different len(num_deconv_filters)' - - def _get_deconv_cfg(deconv_kernel, index): - if deconv_kernel == 4: - padding = 1 - output_padding = 0 - elif deconv_kernel == 3: - padding = 1 - output_padding = 1 - elif deconv_kernel == 2: - padding = 0 - output_padding = 0 - - return deconv_kernel, padding, output_padding - - layers = [] - for i in range(num_layers): - kernel, padding, output_padding = _get_deconv_cfg( - num_kernels[i], i) - - planes = num_filters[i] - layers.append( - nn.ConvTranspose2d(in_channels=self.inplanes, - out_channels=planes, - kernel_size=kernel, - stride=2, - padding=padding, - output_padding=output_padding, - bias=self.deconv_with_bias)) - layers.append(nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)) - layers.append(nn.ReLU(inplace=True)) - self.inplanes = planes - - return nn.Sequential(*layers) - - def forward(self, x, J_regressor=None): - - batch_size = x.shape[0] - - # spatial features and global features - s_feat, g_feat = self.feature_extractor(x) - - assert cfg.MODEL.PyMAF.N_ITER >= 0 and cfg.MODEL.PyMAF.N_ITER <= 3 - if cfg.MODEL.PyMAF.N_ITER == 1: - deconv_blocks = [self.deconv_layers] - elif cfg.MODEL.PyMAF.N_ITER == 2: - deconv_blocks = [self.deconv_layers[0:6], self.deconv_layers[6:9]] - elif cfg.MODEL.PyMAF.N_ITER == 3: - deconv_blocks = [ - self.deconv_layers[0:3], self.deconv_layers[3:6], - self.deconv_layers[6:9] - ] - - out_list = {} - - # initial parameters - # TODO: remove the initial mesh generation during forward to reduce runtime - # by generating initial mesh the beforehand: smpl_output = self.init_smpl - smpl_output = self.regressor[0].forward_init(g_feat, - J_regressor=J_regressor) - - out_list['smpl_out'] = [smpl_output] - out_list['dp_out'] = [] - - # for visulization - vis_feat_list = [s_feat.detach()] - - # parameter predictions - for rf_i in range(cfg.MODEL.PyMAF.N_ITER): - pred_cam = smpl_output['pred_cam'] - pred_shape = smpl_output['pred_shape'] - pred_pose = smpl_output['pred_pose'] - - pred_cam = pred_cam.detach() - pred_shape = pred_shape.detach() - pred_pose = pred_pose.detach() - - s_feat_i = deconv_blocks[rf_i](s_feat) - s_feat = s_feat_i - vis_feat_list.append(s_feat_i.detach()) - - self.maf_extractor[rf_i].im_feat = s_feat_i - self.maf_extractor[rf_i].cam = pred_cam - - if rf_i == 0: - sample_points = torch.transpose( - self.points_grid.expand(batch_size, -1, -1), 1, 2) - ref_feature = self.maf_extractor[rf_i].sampling(sample_points) - else: - pred_smpl_verts = smpl_output['verts'].detach() - # TODO: use a more sparse SMPL implementation (with 431 vertices) for acceleration - pred_smpl_verts_ds = torch.matmul( - self.maf_extractor[rf_i].Dmap.unsqueeze(0), - pred_smpl_verts) # [B, 431, 3] - ref_feature = self.maf_extractor[rf_i]( - pred_smpl_verts_ds) # [B, 431 * n_feat] - - smpl_output = self.regressor[rf_i](ref_feature, - pred_pose, - pred_shape, - pred_cam, - n_iter=1, - J_regressor=J_regressor) - out_list['smpl_out'].append(smpl_output) - - if self.training and cfg.MODEL.PyMAF.AUX_SUPV_ON: - iuv_out_dict = self.dp_head(s_feat) - out_list['dp_out'].append(iuv_out_dict) - - return out_list - - -def pymaf_net(smpl_mean_params, pretrained=True): - """ Constructs an PyMAF model with ResNet50 backbone. - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - """ - model = PyMAF(smpl_mean_params, pretrained) - return model diff --git a/spaces/ZJunTvT/ZJunChat/modules/config.py b/spaces/ZJunTvT/ZJunChat/modules/config.py deleted file mode 100644 index 2eee7730787df6a857de21dbb0cbefc42cb7273d..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/modules/config.py +++ /dev/null @@ -1,173 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import commentjson as json - -from . import shared -from . import presets - - -__all__ = [ - "my_api_key", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "multi_api_key", - "server_name", - "server_port", - "share", -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - -lang_config = config.get("language", "auto") -language = os.environ.get("LANGUAGE", lang_config) - -if os.path.exists("api_key.txt"): - logging.info("检测到api_key.txt文件,正在进行迁移...") - with open("api_key.txt", "r") as f: - config["openai_api_key"] = f.read().strip() - os.rename("api_key.txt", "api_key(deprecated).txt") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -if os.path.exists("auth.json"): - logging.info("检测到auth.json文件,正在进行迁移...") - auth_list = [] - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - config["users"] = auth_list - os.rename("auth.json", "auth(deprecated).json") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -## 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -## 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "") -my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key) - -xmchat_api_key = config.get("xmchat_api_key", "") -if os.environ.get("XMCHAT_API_KEY", None) == None: - os.environ["XMCHAT_API_KEY"] = xmchat_api_key - -## 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get("api_host", config.get("api_host", "")) -if api_host: - shared.state.set_api_host(api_host) - -@contextmanager -def retrieve_openai_api(api_key = None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - -## 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -## 处理代理: -http_proxy = config.get("http_proxy", "") -https_proxy = config.get("https_proxy", "") -http_proxy = os.environ.get("HTTP_PROXY", http_proxy) -https_proxy = os.environ.get("HTTPS_PROXY", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -local_embedding = config.get("local_embedding", False) # 是否使用本地embedding - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - -## 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) -def update_doc_config(two_column_pdf): - global advance_docs - advance_docs["pdf"]["two_column"] = two_column_pdf - - logging.info(f"更新后的文件参数为:{advance_docs}") - -## 处理gradio.launch参数 -server_name = config.get("server_name", None) -server_port = config.get("server_port", None) -if server_name is None: - if dockerflag: - server_name = "0.0.0.0" - else: - server_name = "127.0.0.1" -if server_port is None: - if dockerflag: - server_port = 7860 - -assert server_port is None or type(server_port) == int, "要求port设置为int类型" - -# 设置默认model -default_model = config.get("default_model", "") -try: - presets.DEFAULT_MODEL = presets.MODELS.index(default_model) -except ValueError: - pass - -share = config.get("share", False) diff --git a/spaces/Zkins/Timmahw-SD2.1_Pokemon3D/README.md b/spaces/Zkins/Timmahw-SD2.1_Pokemon3D/README.md deleted file mode 100644 index 9bcca492a7acdf59d873019b7cfed18d7a4b7cc2..0000000000000000000000000000000000000000 --- a/spaces/Zkins/Timmahw-SD2.1_Pokemon3D/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Timmahw-SD2.1 Pokemon3D -emoji: 📈 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abdullah040/TextBook/app.py b/spaces/abdullah040/TextBook/app.py deleted file mode 100644 index ba829e93706f24ebc5689715d956c16b0e88d0f1..0000000000000000000000000000000000000000 --- a/spaces/abdullah040/TextBook/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from gpt_index import GPTSimpleVectorIndex -from gpt_index.langchain_helpers.chain_wrapper import OpenAI -import os - -os.environ["OPENAI_API_KEY"] = "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" -index = GPTSimpleVectorIndex.load_from_disk('./index (4).json') - -def my_model_function(input_text): - response = index.query(input_text, response_mode="compact") - return response - -iface = gr.Interface( - fn=my_model_function, - inputs=gr.inputs.Textbox(label="Enter your question here"), - outputs="text", - title="GPT Index Demo", - description="Type a question and see the matching responses from the index!" -) - -iface.launch() diff --git a/spaces/abionchito/rvc-models/infer_pack/attentions.py b/spaces/abionchito/rvc-models/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/abionchito/rvc-models/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/abyildirim/inst-inpaint/ldm/modules/diffusionmodules/util.py b/spaces/abyildirim/inst-inpaint/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index 4873508eec2b432b2ab5d82dfd56cd3a6b207c1c..0000000000000000000000000000000000000000 --- a/spaces/abyildirim/inst-inpaint/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,270 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - - # For storing the attention maps, a dict object is sent to the forward function. - # Not to raise an error in below detach operation (backward function), dict and None objects are discarded. - inputs = [x for x in inputs if x is not isinstance(x, dict) and x is not None] - - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/aidiary/tts-ljspeech-demo/app.py b/spaces/aidiary/tts-ljspeech-demo/app.py deleted file mode 100644 index 3d56e4f8336389dec641a036485d14db4486eca7..0000000000000000000000000000000000000000 --- a/spaces/aidiary/tts-ljspeech-demo/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import uuid - -import gradio as gr -import soundfile as sf -import torch -from huggingface_hub import hf_hub_download -from TTS.tts.configs.vits_config import VitsConfig -from TTS.tts.models.vits import Vits, VitsAudioConfig -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -REPO_ID = "aidiary/vits-tts-ljspeech" -FILENAME = "checkpoint_60000.pth" - -audio_config = VitsAudioConfig( - sample_rate=22050, - win_length=1024, - hop_length=256, - num_mels=80, - mel_fmin=0, - mel_fmax=None, -) - -config = VitsConfig( - audio=audio_config, - run_name="vits_ljspeech", - batch_size=32, - eval_batch_size=16, - batch_group_size=5, - num_loader_workers=8, - num_eval_loader_workers=4, - run_eval=True, - test_delay_epochs=-1, - epochs=1000, - text_cleaner="english_cleaners", - use_phonemes=True, - phoneme_language="en-us", - compute_input_seq_cache=True, - print_step=25, - print_eval=True, - mixed_precision=True, - cudnn_benchmark=False, -) - -ap = AudioProcessor.init_from_config(config) - -tokenizer, config = TTSTokenizer.init_from_config(config) - -checkpoint_path = hf_hub_download(REPO_ID, FILENAME) -model = Vits(config, ap, tokenizer, speaker_manager=None) -model.load_checkpoint(config, checkpoint_path, eval=True) - - -def tts(text): - token_ids = tokenizer.text_to_ids(text) - token_ids = torch.Tensor(token_ids).long() - token_ids = token_ids.unsqueeze(0) - - outputs = model.inference(token_ids) - waveform = outputs["model_outputs"].squeeze().numpy() - - outfile = f"{uuid.uuid1()}.wav" - sf.write(outfile, waveform, config.audio.sample_rate) - - return outfile - - -inputs = gr.Textbox(label="Input", max_lines=3) -outputs = gr.Audio(label="Output") - -demo = gr.Interface(fn=tts, inputs=inputs, outputs=outputs) -demo.launch() diff --git a/spaces/akhaliq/JoJoGAN/e4e/criteria/moco_loss.py b/spaces/akhaliq/JoJoGAN/e4e/criteria/moco_loss.py deleted file mode 100644 index 8fb13fbd426202cff9014c876c85b0d5c4ec6a9d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/criteria/moco_loss.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from configs.paths_config import model_paths - - -class MocoLoss(nn.Module): - - def __init__(self, opts): - super(MocoLoss, self).__init__() - print("Loading MOCO model from path: {}".format(model_paths["moco"])) - self.model = self.__load_model() - self.model.eval() - for param in self.model.parameters(): - param.requires_grad = False - - @staticmethod - def __load_model(): - import torchvision.models as models - model = models.__dict__["resnet50"]() - # freeze all layers but the last fc - for name, param in model.named_parameters(): - if name not in ['fc.weight', 'fc.bias']: - param.requires_grad = False - checkpoint = torch.load(model_paths['moco'], map_location="cpu") - state_dict = checkpoint['state_dict'] - # rename moco pre-trained keys - for k in list(state_dict.keys()): - # retain only encoder_q up to before the embedding layer - if k.startswith('module.encoder_q') and not k.startswith('module.encoder_q.fc'): - # remove prefix - state_dict[k[len("module.encoder_q."):]] = state_dict[k] - # delete renamed or unused k - del state_dict[k] - msg = model.load_state_dict(state_dict, strict=False) - assert set(msg.missing_keys) == {"fc.weight", "fc.bias"} - # remove output layer - model = nn.Sequential(*list(model.children())[:-1]).cuda() - return model - - def extract_feats(self, x): - x = F.interpolate(x, size=224) - x_feats = self.model(x) - x_feats = nn.functional.normalize(x_feats, dim=1) - x_feats = x_feats.squeeze() - return x_feats - - def forward(self, y_hat, y, x): - n_samples = x.shape[0] - x_feats = self.extract_feats(x) - y_feats = self.extract_feats(y) - y_hat_feats = self.extract_feats(y_hat) - y_feats = y_feats.detach() - loss = 0 - sim_improvement = 0 - sim_logs = [] - count = 0 - for i in range(n_samples): - diff_target = y_hat_feats[i].dot(y_feats[i]) - diff_input = y_hat_feats[i].dot(x_feats[i]) - diff_views = y_feats[i].dot(x_feats[i]) - sim_logs.append({'diff_target': float(diff_target), - 'diff_input': float(diff_input), - 'diff_views': float(diff_views)}) - loss += 1 - diff_target - sim_diff = float(diff_target) - float(diff_views) - sim_improvement += sim_diff - count += 1 - - return loss / count, sim_improvement / count, sim_logs diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/modules/__init__.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/modules/__init__.py deleted file mode 100644 index 6fdbf03359958f3d67ab00f879bf6b61a6c8f06a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/pixel_decoder/ops/modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from .ms_deform_attn import MSDeformAttn diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/position_encoding.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/position_encoding.py deleted file mode 100644 index f32532e070e67b2cd25771aea1ad10e7e5a5dc69..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/position_encoding.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# # Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, x, mask=None): - if mask is None: - mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - def __repr__(self, _repr_indent=4): - head = "Positional encoding " + self.__class__.__name__ - body = [ - "num_pos_feats: {}".format(self.num_pos_feats), - "temperature: {}".format(self.temperature), - "normalize: {}".format(self.normalize), - "scale: {}".format(self.scale), - ] - # _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) diff --git a/spaces/akhaliq/Real-ESRGAN/README.md b/spaces/akhaliq/Real-ESRGAN/README.md deleted file mode 100644 index 87ad054801a0fd3d2ff7961285f07e7890dcfe82..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-ESRGAN/README.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Real ESRGAN -emoji: 🏃 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/Scientific_Title_Generator/app.py b/spaces/akhaliq/Scientific_Title_Generator/app.py deleted file mode 100644 index f5b0fc2fc56c04b2817ac8fb1298dc72c4fa4b30..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Scientific_Title_Generator/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -title = "Scientific Title Generator" -description = "Gradio demo for Scientific Title Generator. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." -article = "

      Huggingface Model

      " -gr.Interface.load("huggingface/AryanLala/autonlp-Scientific_Title_Generator-34558227",inputs=gr.inputs.Textbox(lines=5, label="Input Text"),title=title, - description=description, - article=article, - examples=[ - ["""The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets."""] - ]).launch() \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/model/single_doc/textrank_model.py b/spaces/akhaliq/SummerTime/model/single_doc/textrank_model.py deleted file mode 100644 index 233d57559d1db67ece3a7ba27a63b94b5a78a954..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/single_doc/textrank_model.py +++ /dev/null @@ -1,89 +0,0 @@ -import spacy -import pytextrank # noqa: F401 -from math import sqrt -from operator import itemgetter -from .base_single_doc_model import SingleDocSummModel -from typing import Union, List - - -class TextRankModel(SingleDocSummModel): - # static variables - model_name = "TextRank" - is_extractive = True - is_neural = False - - def __init__(self, num_sentences=1): - super(TextRankModel, self).__init__() - - self.num_sentences = num_sentences - # load a spaCy model, depending on language, scale, etc. - self.nlp = spacy.load("en_core_web_sm") - self.nlp.add_pipe("textrank", last=True) - - def summarize( - self, corpus: Union[List[str], List[List[str]]], queries: List[str] = None - ) -> List[str]: - self.assert_summ_input_type(corpus, queries) - - return list(map(lambda x: " ".join(self.summarize_single(x)), corpus)) - - def summarize_single(self, corpus) -> List[str]: - # add PyTextRank to the spaCy pipeline - doc = self.nlp(corpus) - sent_bounds = [[s.start, s.end, set([])] for s in doc.sents] - - limit_phrases = self.num_sentences - phrase_id = 0 - unit_vector = [] - for p in doc._.phrases: - unit_vector.append(p.rank) - for chunk in p.chunks: - for sent_start, sent_end, sent_vector in sent_bounds: - if chunk.start >= sent_start and chunk.end <= sent_end: - sent_vector.add(phrase_id) - break - phrase_id += 1 - if phrase_id == limit_phrases: - break - - sum_ranks = sum(unit_vector) - - unit_vector = [rank / sum_ranks for rank in unit_vector] - - sent_rank = {} - sent_id = 0 - for sent_start, sent_end, sent_vector in sent_bounds: - sum_sq = 0.0 - for phrase_id in range(len(unit_vector)): - if phrase_id not in sent_vector: - sum_sq += unit_vector[phrase_id] ** 2.0 - sent_rank[sent_id] = sqrt(sum_sq) - sent_id += 1 - - sorted(sent_rank.items(), key=itemgetter(1)) - - sent_text = {} - sent_id = 0 - limit_sentences = self.num_sentences - summary_sentences = [] - for sent in doc.sents: - sent_text[sent_id] = sent.text - sent_id += 1 - num_sent = 0 - for sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)): - summary_sentences.append(sent_text[sent_id]) - num_sent += 1 - if num_sent == limit_sentences: - break - - return summary_sentences - - @classmethod - def show_capability(cls): - basic_description = cls.generate_basic_description() - more_details = ( - "A graphbased ranking model for text processing. Extractive sentence summarization. \n " - "Strengths: \n - Fast with low memory usage \n - Allows for control of summary length \n " - "Weaknesses: \n - Not as accurate as neural methods." - ) - print(f"{basic_description} \n {'#'*20} \n {more_details}") diff --git a/spaces/akhaliq/deeplab2/trainer/evaluator_test.py b/spaces/akhaliq/deeplab2/trainer/evaluator_test.py deleted file mode 100644 index c0fd02456773521965e8370b585210759010ab55..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/trainer/evaluator_test.py +++ /dev/null @@ -1,319 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for the evaluator.""" - -import os -import tempfile -from unittest import mock - -from absl import flags -import numpy as np -import tensorflow as tf - -from google.protobuf import text_format -from deeplab2 import common -from deeplab2 import config_pb2 -from deeplab2 import trainer_pb2 -from deeplab2.data import data_utils -from deeplab2.data import dataset -from deeplab2.data import sample_generator -from deeplab2.model import deeplab -from deeplab2.model.loss import loss_builder -from deeplab2.trainer import evaluator -from deeplab2.trainer import runner_utils - -# resources dependency - -_CONFIG_PATH = 'deeplab2/configs/example' - -flags.DEFINE_string( - 'panoptic_annotation_data', - 'deeplab2/data/testdata/', - 'Path to annotated test image.') - -FLAGS = flags.FLAGS - -_FILENAME_PREFIX = 'dummy_000000_000000' -_IMAGE_FOLDER = 'leftImg8bit/' - - -def _read_proto_file(filename, proto): - filename = filename # OSS: removed internal filename loading. - with tf.io.gfile.GFile(filename, 'r') as proto_file: - return text_format.ParseLines(proto_file, proto) - - -def _create_panoptic_deeplab_loss(dataset_info): - semantic_loss_options = trainer_pb2.LossOptions.SingleLossOptions( - name='softmax_cross_entropy') - center_loss_options = trainer_pb2.LossOptions.SingleLossOptions(name='mse') - regression_loss_options = trainer_pb2.LossOptions.SingleLossOptions( - name='l1') - loss_options = trainer_pb2.LossOptions( - semantic_loss=semantic_loss_options, - center_loss=center_loss_options, - regression_loss=regression_loss_options) - - loss_layer = loss_builder.DeepLabFamilyLoss( - loss_options, - num_classes=dataset_info.num_classes, - ignore_label=dataset_info.ignore_label, - thing_class_ids=dataset_info.class_has_instances_list) - return loss_layer - - -def _create_max_deeplab_loss(dataset_info): - semantic_loss_options = trainer_pb2.LossOptions.SingleLossOptions( - name='softmax_cross_entropy') - pq_style_loss_options = trainer_pb2.LossOptions.SingleLossOptions() - mask_id_cross_entropy_loss_options = ( - trainer_pb2.LossOptions.SingleLossOptions()) - instance_discrimination_loss_options = ( - trainer_pb2.LossOptions.SingleLossOptions()) - loss_options = trainer_pb2.LossOptions( - semantic_loss=semantic_loss_options, - pq_style_loss=pq_style_loss_options, - mask_id_cross_entropy_loss=mask_id_cross_entropy_loss_options, - instance_discrimination_loss=instance_discrimination_loss_options) - loss_layer = loss_builder.DeepLabFamilyLoss( - loss_options, - num_classes=dataset_info.num_classes, - ignore_label=dataset_info.ignore_label, - thing_class_ids=dataset_info.class_has_instances_list) - return loss_layer - - -class RealDataEvaluatorTest(tf.test.TestCase): - - def setUp(self): - super().setUp() - self._test_img_data_dir = os.path.join( - FLAGS.test_srcdir, - FLAGS.panoptic_annotation_data, - _IMAGE_FOLDER) - self._test_gt_data_dir = os.path.join( - FLAGS.test_srcdir, - FLAGS.panoptic_annotation_data) - image_path = self._test_img_data_dir + _FILENAME_PREFIX + '_leftImg8bit.png' - with tf.io.gfile.GFile(image_path, 'rb') as image_file: - rgb_image = data_utils.read_image(image_file.read()) - self._rgb_image = tf.convert_to_tensor(np.array(rgb_image)) - label_path = self._test_gt_data_dir + 'dummy_gt_for_vps.png' - with tf.io.gfile.GFile(label_path, 'rb') as label_file: - label = data_utils.read_image(label_file.read()) - self._label = tf.expand_dims(tf.convert_to_tensor( - np.dot(np.array(label), [1, 256, 256 * 256])), -1) - - def test_evaluates_max_deeplab_model(self): - tf.random.set_seed(0) - np.random.seed(0) - small_instances = {'threshold': 4096, 'weight': 1.0} - generator = sample_generator.PanopticSampleGenerator( - dataset.CITYSCAPES_PANOPTIC_INFORMATION._asdict(), - focus_small_instances=small_instances, - is_training=False, - crop_size=[769, 769], - thing_id_mask_annotations=True) - input_sample = { - 'image': self._rgb_image, - 'image_name': 'test_image', - 'label': self._label, - 'height': 800, - 'width': 800 - } - sample = generator(input_sample) - - experiment_options_textproto = """ - experiment_name: "evaluation_test" - eval_dataset_options { - dataset: "cityscapes_panoptic" - file_pattern: "EMPTY" - batch_size: 1 - crop_size: 769 - crop_size: 769 - thing_id_mask_annotations: true - } - evaluator_options { - continuous_eval_timeout: 43200 - stuff_area_limit: 2048 - center_score_threshold: 0.1 - nms_kernel: 13 - save_predictions: true - save_raw_predictions: false - } - """ - config = text_format.Parse(experiment_options_textproto, - config_pb2.ExperimentOptions()) - - model_proto_filename = os.path.join( - _CONFIG_PATH, 'example_coco_max_deeplab.textproto') - model_config = _read_proto_file(model_proto_filename, - config_pb2.ExperimentOptions()) - config.model_options.CopyFrom(model_config.model_options) - config.model_options.max_deeplab.auxiliary_semantic_head.output_channels = ( - 19) - model = deeplab.DeepLab(config, dataset.CITYSCAPES_PANOPTIC_INFORMATION) - pool_size = (49, 49) - model.set_pool_size(pool_size) - - loss_layer = _create_max_deeplab_loss( - dataset.CITYSCAPES_PANOPTIC_INFORMATION) - global_step = tf.Variable(initial_value=0, dtype=tf.int64) - - batched_sample = {} - for key, value in sample.items(): - batched_sample[key] = tf.expand_dims(value, axis=0) - real_data = [batched_sample] - - with tempfile.TemporaryDirectory() as model_dir: - with mock.patch.object(runner_utils, 'create_dataset'): - ev = evaluator.Evaluator( - config, model, loss_layer, global_step, model_dir) - - state = ev.eval_begin() - # Verify that output directories are created. - self.assertTrue(os.path.isdir(os.path.join(model_dir, 'vis'))) - - step_outputs = ev.eval_step(iter(real_data)) - - state = ev.eval_reduce(state, step_outputs) - result = ev.eval_end(state) - - expected_metric_keys = { - 'losses/eval_' + common.TOTAL_LOSS, - 'losses/eval_' + common.SEMANTIC_LOSS, - 'losses/eval_' + common.PQ_STYLE_LOSS_CLASS_TERM, - 'losses/eval_' + common.PQ_STYLE_LOSS_MASK_DICE_TERM, - 'losses/eval_' + common.MASK_ID_CROSS_ENTROPY_LOSS, - 'losses/eval_' + common.INSTANCE_DISCRIMINATION_LOSS, - 'evaluation/iou/IoU', - 'evaluation/pq/PQ', - 'evaluation/pq/SQ', - 'evaluation/pq/RQ', - 'evaluation/pq/TP', - 'evaluation/pq/FN', - 'evaluation/pq/FP', - } - self.assertCountEqual(result.keys(), expected_metric_keys) - self.assertSequenceEqual(result['losses/eval_total_loss'].shape, ()) - - -class EvaluatorTest(tf.test.TestCase): - - def test_evaluates_panoptic_deeplab_model(self): - experiment_options_textproto = """ - experiment_name: "evaluation_test" - eval_dataset_options { - dataset: "cityscapes_panoptic" - file_pattern: "EMPTY" - batch_size: 1 - crop_size: 1025 - crop_size: 2049 - # Skip resizing. - min_resize_value: 0 - max_resize_value: 0 - } - evaluator_options { - continuous_eval_timeout: 43200 - stuff_area_limit: 2048 - center_score_threshold: 0.1 - nms_kernel: 13 - save_predictions: true - save_raw_predictions: false - } - """ - config = text_format.Parse(experiment_options_textproto, - config_pb2.ExperimentOptions()) - - model_proto_filename = os.path.join( - _CONFIG_PATH, 'example_cityscapes_panoptic_deeplab.textproto') - model_config = _read_proto_file(model_proto_filename, - config_pb2.ExperimentOptions()) - config.model_options.CopyFrom(model_config.model_options) - model = deeplab.DeepLab(config, dataset.CITYSCAPES_PANOPTIC_INFORMATION) - pool_size = (33, 65) - model.set_pool_size(pool_size) - - loss_layer = _create_panoptic_deeplab_loss( - dataset.CITYSCAPES_PANOPTIC_INFORMATION) - global_step = tf.Variable(initial_value=0, dtype=tf.int64) - - fake_datum = { - common.IMAGE: - tf.zeros([1, 1025, 2049, 3]), - common.RESIZED_IMAGE: - tf.zeros([1, 1025, 2049, 3]), - common.GT_SIZE_RAW: - tf.constant([[1025, 2049]], dtype=tf.int32), - common.GT_SEMANTIC_KEY: - tf.zeros([1, 1025, 2049], dtype=tf.int32), - common.GT_SEMANTIC_RAW: - tf.zeros([1, 1025, 2049], dtype=tf.int32), - common.GT_PANOPTIC_RAW: - tf.zeros([1, 1025, 2049], dtype=tf.int32), - common.GT_IS_CROWD_RAW: - tf.zeros([1, 1025, 2049], dtype=tf.uint8), - common.GT_INSTANCE_CENTER_KEY: - tf.zeros([1, 1025, 2049], dtype=tf.float32), - common.GT_INSTANCE_REGRESSION_KEY: - tf.zeros([1, 1025, 2049, 2], dtype=tf.float32), - common.IMAGE_NAME: - 'fake', - common.SEMANTIC_LOSS_WEIGHT_KEY: - tf.zeros([1, 1025, 2049], dtype=tf.float32), - common.CENTER_LOSS_WEIGHT_KEY: - tf.zeros([1, 1025, 2049], dtype=tf.float32), - common.REGRESSION_LOSS_WEIGHT_KEY: - tf.zeros([1, 1025, 2049], dtype=tf.float32), - } - fake_data = [fake_datum] - - with tempfile.TemporaryDirectory() as model_dir: - with mock.patch.object(runner_utils, 'create_dataset'): - ev = evaluator.Evaluator( - config, model, loss_layer, global_step, model_dir) - - state = ev.eval_begin() - # Verify that output directories are created. - self.assertTrue(os.path.isdir(os.path.join(model_dir, 'vis'))) - - step_outputs = ev.eval_step(iter(fake_data)) - - state = ev.eval_reduce(state, step_outputs) - result = ev.eval_end(state) - - expected_metric_keys = { - 'losses/eval_total_loss', - 'losses/eval_semantic_loss', - 'losses/eval_center_loss', - 'losses/eval_regression_loss', - 'evaluation/iou/IoU', - 'evaluation/pq/PQ', - 'evaluation/pq/SQ', - 'evaluation/pq/RQ', - 'evaluation/pq/TP', - 'evaluation/pq/FN', - 'evaluation/pq/FP', - 'evaluation/ap/AP_Mask', - } - self.assertCountEqual(result.keys(), expected_metric_keys) - - self.assertSequenceEqual(result['losses/eval_total_loss'].shape, ()) - self.assertEqual(result['losses/eval_total_loss'].numpy(), 0.0) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/akhaliq/mdetr/app.py b/spaces/akhaliq/mdetr/app.py deleted file mode 100644 index c7e7da09f332a644e4c5e56cd536797a665fe7bf..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/mdetr/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import os -os.system('pip install gradio==2.3.0a0') -os.system('pip freeze') -import torch -from PIL import Image -import requests -import torchvision.transforms as T -import matplotlib.pyplot as plt -from collections import defaultdict -import torch.nn.functional as F -import numpy as np -from skimage.measure import find_contours - -from matplotlib import patches, lines -from matplotlib.patches import Polygon -import gradio as gr - -torch.hub.download_url_to_file('https://cdn.pixabay.com/photo/2014/03/04/15/10/elephants-279505_1280.jpg', 'elephant.jpg') - -torch.set_grad_enabled(False); -# standard PyTorch mean-std input image normalization -transform = T.Compose([ - T.Resize(800), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -]) - -# for output bounding box post-processing -def box_cxcywh_to_xyxy(x): - x_c, y_c, w, h = x.unbind(1) - b = [(x_c - 0.5 * w), (y_c - 0.5 * h), - (x_c + 0.5 * w), (y_c + 0.5 * h)] - return torch.stack(b, dim=1) - -def rescale_bboxes(out_bbox, size): - img_w, img_h = size - b = box_cxcywh_to_xyxy(out_bbox) - b = b * torch.tensor([img_w, img_h, img_w, img_h], dtype=torch.float32) - return b -# colors for visualization -COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098], [0.929, 0.694, 0.125], - [0.494, 0.184, 0.556], [0.466, 0.674, 0.188], [0.301, 0.745, 0.933]] - -def apply_mask(image, mask, color, alpha=0.5): - """Apply the given mask to the image. - """ - for c in range(3): - image[:, :, c] = np.where(mask == 1, - image[:, :, c] * - (1 - alpha) + alpha * color[c] * 255, - image[:, :, c]) - return image - -def plot_results(pil_img, scores, boxes, labels, masks=None): - plt.figure(figsize=(16,10)) - np_image = np.array(pil_img) - ax = plt.gca() - colors = COLORS * 100 - if masks is None: - masks = [None for _ in range(len(scores))] - assert len(scores) == len(boxes) == len(labels) == len(masks) - for s, (xmin, ymin, xmax, ymax), l, mask, c in zip(scores, boxes.tolist(), labels, masks, colors): - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, - fill=False, color=c, linewidth=3)) - text = f'{l}: {s:0.2f}' - ax.text(xmin, ymin, text, fontsize=15, bbox=dict(facecolor='white', alpha=0.8)) - - if mask is None: - continue - np_image = apply_mask(np_image, mask, c) - - padded_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8) - padded_mask[1:-1, 1:-1] = mask - contours = find_contours(padded_mask, 0.5) - for verts in contours: - # Subtract the padding and flip (y, x) to (x, y) - verts = np.fliplr(verts) - 1 - p = Polygon(verts, facecolor="none", edgecolor=c) - ax.add_patch(p) - - - plt.imshow(np_image) - plt.axis('off') - plt.savefig('foo.png',bbox_inches='tight') - return 'foo.png' - - -def add_res(results, ax, color='green'): - #for tt in results.values(): - if True: - bboxes = results['boxes'] - labels = results['labels'] - scores = results['scores'] - #keep = scores >= 0.0 - #bboxes = bboxes[keep].tolist() - #labels = labels[keep].tolist() - #scores = scores[keep].tolist() - #print(torchvision.ops.box_iou(tt['boxes'].cpu().detach(), torch.as_tensor([[xmin, ymin, xmax, ymax]]))) - - colors = ['purple', 'yellow', 'red', 'green', 'orange', 'pink'] - - for i, (b, ll, ss) in enumerate(zip(bboxes, labels, scores)): - ax.add_patch(plt.Rectangle((b[0], b[1]), b[2] - b[0], b[3] - b[1], fill=False, color=colors[i], linewidth=3)) - cls_name = ll if isinstance(ll,str) else CLASSES[ll] - text = f'{cls_name}: {ss:.2f}' - print(text) - ax.text(b[0], b[1], text, fontsize=15, bbox=dict(facecolor='white', alpha=0.8)) -model, postprocessor = torch.hub.load('ashkamath/mdetr:main', 'mdetr_efficientnetB5', pretrained=True, return_postprocessor=True) -model = model.cpu() -model.eval(); - - -def plot_inference(im, caption): - # mean-std normalize the input image (batch-size: 1) - img = transform(im).unsqueeze(0).cpu() - - # propagate through the model - memory_cache = model(img, [caption], encode_and_save=True) - outputs = model(img, [caption], encode_and_save=False, memory_cache=memory_cache) - - # keep only predictions with 0.7+ confidence - probas = 1 - outputs['pred_logits'].softmax(-1)[0, :, -1].cpu() - keep = (probas > 0.7).cpu() - - # convert boxes from [0; 1] to image scales - bboxes_scaled = rescale_bboxes(outputs['pred_boxes'].cpu()[0, keep], im.size) - - # Extract the text spans predicted by each box - positive_tokens = (outputs["pred_logits"].cpu()[0, keep].softmax(-1) > 0.1).nonzero().tolist() - predicted_spans = defaultdict(str) - for tok in positive_tokens: - item, pos = tok - if pos < 255: - span = memory_cache["tokenized"].token_to_chars(0, pos) - predicted_spans [item] += " " + caption[span.start:span.end] - - labels = [predicted_spans [k] for k in sorted(list(predicted_spans .keys()))] - return plot_results(im, probas[keep], bboxes_scaled, labels) - - - -title = "MDETR" -description = "Gradio demo for MDETR: Modulated Detection for End-to-End Multi-Modal Understanding. To use it, simply upload your image and add text, or click one of the examples to load them. Read more at the links below." -article = "

      MDETR: Modulated Detection for End-to-End Multi-Modal Understanding | Github Repo

      " -examples =[['elephant.jpg','baby elephant']] -gr.Interface( - plot_inference, - [gr.inputs.Image(type="pil", label="Input"), gr.inputs.Textbox(label="input text")], - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - examples=examples, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/ali-ghamdan/deoldify/fastai/callback.py b/spaces/ali-ghamdan/deoldify/fastai/callback.py deleted file mode 100644 index f64f0847d6307568760f6662ff87a15bb272d685..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/callback.py +++ /dev/null @@ -1,396 +0,0 @@ -"Callbacks provides extensibility to the `basic_train` loop. See `train` for examples of custom callbacks." -from .basic_data import * -from .torch_core import * -import torch.distributed as dist - -__all__ = ['AverageMetric', 'Callback', 'CallbackHandler', 'OptimWrapper', 'SmoothenValue', 'Scheduler', 'annealing_cos', 'CallbackList', - 'annealing_exp', 'annealing_linear', 'annealing_no', 'annealing_poly'] - -class OptimWrapper(): - "Basic wrapper around `opt` to simplify hyper-parameters changes." - def __init__(self, opt:optim.Optimizer, wd:Floats=0., true_wd:bool=False, bn_wd:bool=True): - assert not isinstance(opt, OptimWrapper) - self.opt,self.true_wd,self.bn_wd = opt,true_wd,bn_wd - self.opt_keys = list(self.opt.param_groups[0].keys()) - self.opt_keys.remove('params') - self.read_defaults() - self.wd = wd - - @classmethod - def create(cls, opt_func:Union[type,Callable], lr:Union[float,Tuple,List], layer_groups:ModuleList, wd:Floats=0., - true_wd:bool=False, bn_wd:bool=True)->optim.Optimizer: - "Create an `optim.Optimizer` from `opt_func` with `lr`. Set lr on `layer_groups`." - split_params = split_no_wd_params(layer_groups) - opt = opt_func([{'params': p, 'lr':0} for p in split_params]) - opt = cls(opt, wd=wd, true_wd=true_wd, bn_wd=bn_wd) - opt.lr,opt.opt_func = listify(lr, layer_groups),opt_func - return opt - - def new(self, layer_groups:Collection[nn.Module], split_no_wd:bool=True): - "Create a new `OptimWrapper` from `self` with another `layer_groups` but the same hyper-parameters." - opt_func = getattr(self, 'opt_func', self.opt.__class__) - res = self.create(opt_func, self.lr, layer_groups, wd=self.wd, true_wd=self.true_wd, bn_wd=self.bn_wd) - res.mom,res.beta = self.mom,self.beta - return res - - def new_with_params(self, param_groups:Collection[Collection[nn.Parameter]]): - "Create a new `OptimWrapper` from `self` with another `layer_groups` but the same hyper-parameters." - opt_func = getattr(self, 'opt_func', self.opt.__class__) - opt = opt_func([{'params': p, 'lr':0} for p in param_groups]) - opt = self.__class__(opt, wd=self.wd, true_wd=self.true_wd, bn_wd=self.bn_wd) - opt.lr,opt.opt_func,opt.mom,opt.beta = self.lr,opt_func,self.mom,self.beta - return opt - - def __repr__(self)->str: - return f'OptimWrapper over {repr(self.opt)}.\nTrue weight decay: {self.true_wd}' - - #Pytorch optimizer methods - def step(self)->None: - "Set weight decay and step optimizer." - # weight decay outside of optimizer step (AdamW) - if self.true_wd: - for lr,wd,pg1,pg2 in zip(self._lr,self._wd,self.opt.param_groups[::2],self.opt.param_groups[1::2]): - for p in pg1['params']: p.data.mul_(1 - wd*lr) - if self.bn_wd: - for p in pg2['params']: p.data.mul_(1 - wd*lr) - self.set_val('weight_decay', listify(0, self._wd)) - self.opt.step() - - def zero_grad(self)->None: - "Clear optimizer gradients." - self.opt.zero_grad() - - #Passthrough to the inner opt. - def __getattr__(self, k:str)->Any: return getattr(self.opt, k, None) - def __setstate__(self,data:Any): self.__dict__.update(data) - - def clear(self): - "Reset the state of the inner optimizer." - sd = self.state_dict() - sd['state'] = {} - self.load_state_dict(sd) - - @property - def n_params(self): return sum([len(pg['params']) for pg in self.opt.param_groups]) - - #Hyperparameters as properties - @property - def lr(self)->float: return self._lr[-1] - @lr.setter - def lr(self, val:float)->None: - self._lr = self.set_val('lr', listify(val, self._lr)) - - @property - def mom(self)->float:return self._mom[-1] - @mom.setter - def mom(self, val:float)->None: - if 'momentum' in self.opt_keys: self.set_val('momentum', listify(val, self._mom)) - elif 'betas' in self.opt_keys: self.set_val('betas', (listify(val, self._mom), self._beta)) - self._mom = listify(val, self._mom) - - @property - def beta(self)->float: return None if self._beta is None else self._beta[-1] - @beta.setter - def beta(self, val:float)->None: - "Set beta (or alpha as makes sense for given optimizer)." - if val is None: return - if 'betas' in self.opt_keys: self.set_val('betas', (self._mom, listify(val, self._beta))) - elif 'alpha' in self.opt_keys: self.set_val('alpha', listify(val, self._beta)) - self._beta = listify(val, self._beta) - - @property - def wd(self)->float: return self._wd[-1] - @wd.setter - def wd(self, val:float)->None: - "Set weight decay." - if not self.true_wd: self.set_val('weight_decay', listify(val, self._wd), bn_groups=self.bn_wd) - self._wd = listify(val, self._wd) - - #Helper functions - def read_defaults(self)->None: - "Read the values inside the optimizer for the hyper-parameters." - self._beta = None - if 'lr' in self.opt_keys: self._lr = self.read_val('lr') - if 'momentum' in self.opt_keys: self._mom = self.read_val('momentum') - if 'alpha' in self.opt_keys: self._beta = self.read_val('alpha') - if 'betas' in self.opt_keys: self._mom,self._beta = self.read_val('betas') - if 'weight_decay' in self.opt_keys: self._wd = self.read_val('weight_decay') - reserved_names = ['params', 'lr', 'momentum', 'alpha', 'betas', 'weight_decay'] - stat_names = [n for n in self.opt_keys if n not in reserved_names] - self._stats = {n:self.read_val(n) for n in stat_names} - - def get_stat(self, name:str)->float: - if name in ['lr', 'mom', 'beta', 'wd']: return getattr(self, name) - else: return self._stats[name][-1] - def set_stat(self, name:str, value:Union[float, Collection[float]])->None: - if name in ['lr', 'mom', 'beta', 'wd']: setattr(self, name, value) - else: - val = listify(value, self._stats[name]) - self.set_val(name, val) - self._stats[name] = val - - def set_val(self, key:str, val:Any, bn_groups:bool=True)->Any: - "Set `val` inside the optimizer dictionary at `key`." - if is_tuple(val): val = [(v1,v2) for v1,v2 in zip(*val)] - for v,pg1,pg2 in zip(val,self.opt.param_groups[::2],self.opt.param_groups[1::2]): - pg1[key] = v - if bn_groups: pg2[key] = v - return val - - def read_val(self, key:str) -> Union[List[float],Tuple[List[float],List[float]]]: - "Read a hyperparameter `key` in the optimizer dictionary." - val = [pg[key] for pg in self.opt.param_groups[::2]] - if is_tuple(val[0]): val = [o[0] for o in val], [o[1] for o in val] - return val - - def get_state(self): - "Return the inner state minus the layer groups." - return {'opt_state':self.opt.state_dict(), 'lr':self._lr, 'wd':self._wd, 'beta':self._beta, 'mom':self._mom, - 'opt_func':self.opt_func, 'true_wd':self.true_wd, 'bn_wd':self.bn_wd} - - @classmethod - def load_with_state_and_layer_group(cls, state:dict, layer_groups:Collection[nn.Module]): - res = cls.create(state['opt_func'], state['lr'], layer_groups, wd=state['wd'], true_wd=state['true_wd'], - bn_wd=state['bn_wd']) - res._mom,res._beta = state['mom'],state['beta'] - res.load_state_dict(state['opt_state']) - return res - -class Callback(): - "Base class for callbacks that want to record values, dynamically change learner params, etc." - _order=0 - def on_train_begin(self, **kwargs:Any)->None: - "To initialize constants in the callback." - pass - def on_epoch_begin(self, **kwargs:Any)->None: - "At the beginning of each epoch." - pass - def on_batch_begin(self, **kwargs:Any)->None: - "Set HP before the output and loss are computed." - pass - def on_loss_begin(self, **kwargs:Any)->None: - "Called after forward pass but before loss has been computed." - pass - def on_backward_begin(self, **kwargs:Any)->None: - "Called after the forward pass and the loss has been computed, but before backprop." - pass - def on_backward_end(self, **kwargs:Any)->None: - "Called after backprop but before optimizer step. Useful for true weight decay in AdamW." - pass - def on_step_end(self, **kwargs:Any)->None: - "Called after the step of the optimizer but before the gradients are zeroed." - pass - def on_batch_end(self, **kwargs:Any)->None: - "Called at the end of the batch." - pass - def on_epoch_end(self, **kwargs:Any)->None: - "Called at the end of an epoch." - pass - def on_train_end(self, **kwargs:Any)->None: - "Useful for cleaning up things and saving files/models." - pass - def jump_to_epoch(self, epoch)->None: - "To resume training at `epoch` directly." - pass - - def get_state(self, minimal:bool=True): - "Return the inner state of the `Callback`, `minimal` or not." - to_remove = ['exclude', 'not_min'] + getattr(self, 'exclude', []).copy() - if minimal: to_remove += getattr(self, 'not_min', []).copy() - return {k:v for k,v in self.__dict__.items() if k not in to_remove} - - def __repr__(self): - attrs = func_args(self.__init__) - to_remove = getattr(self, 'exclude', []) - list_repr = [self.__class__.__name__] + [f'{k}: {getattr(self, k)}' for k in attrs if k != 'self' and k not in to_remove] - return '\n'.join(list_repr) - -class SmoothenValue(): - "Create a smooth moving average for a value (loss, etc) using `beta`." - def __init__(self, beta:float): - self.beta,self.n,self.mov_avg = beta,0,0 - - def add_value(self, val:float)->None: - "Add `val` to calculate updated smoothed value." - self.n += 1 - self.mov_avg = self.beta * self.mov_avg + (1 - self.beta) * val - self.smooth = self.mov_avg / (1 - self.beta ** self.n) - -CallbackList = Collection[Callback] - -def _get_init_state(): return {'epoch':0, 'iteration':0, 'num_batch':0, 'skip_validate': False} - -@dataclass -class CallbackHandler(): - "Manage all of the registered `callbacks` and `metrics`, smoothing loss by momentum `beta`." - callbacks:CallbackList=None - metrics:CallbackList=None - beta:float=0.98 - - def __post_init__(self)->None: - "Initialize smoother and learning stats." - self.callbacks = ifnone(self.callbacks, []) - self.metrics = ifnone(self.metrics, []) - self.metrics = [(met if isinstance(met, Callback) else AverageMetric(met)) for met in self.metrics] - self.callbacks = sorted(self.callbacks, key=lambda o: getattr(o, '_order', 0)) - self.smoothener = SmoothenValue(self.beta) - self.state_dict:Dict[str,Union[int,float,Tensor]]=_get_init_state() - - def _call_and_update(self, cb, cb_name, **kwargs)->None: - "Call `cb_name` on `cb` and update the inner state." - new = ifnone(getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs), dict()) - for k,v in new.items(): - if k not in self.state_dict: - raise Exception(f"{k} isn't a valid key in the state of the callbacks.") - else: self.state_dict[k] = v - - def __call__(self, cb_name, call_mets=True, **kwargs)->None: - "Call through to all of the `CallbakHandler` functions." - if call_mets: - for met in self.metrics: self._call_and_update(met, cb_name, **kwargs) - for cb in self.callbacks: self._call_and_update(cb, cb_name, **kwargs) - - def set_dl(self, dl:DataLoader): - "Set the current `dl` used." - if hasattr(self, 'cb_dl'): self.callbacks.remove(self.cb_dl) - if isinstance(dl.dataset, Callback): - self.callbacks.append(dl.dataset) - self.cb_dl = dl.dataset - - def on_train_begin(self, epochs:int, pbar:PBar, metrics:MetricFuncList)->None: - "About to start learning." - self.state_dict = _get_init_state() - self.state_dict.update(dict(n_epochs=epochs, pbar=pbar, metrics=metrics)) - names = [(met.name if hasattr(met, 'name') else camel2snake(met.__class__.__name__)) for met in self.metrics] - self('train_begin', metrics_names=names) - if self.state_dict['epoch'] != 0: - self.state_dict['pbar'].first_bar.total -= self.state_dict['epoch'] - for cb in self.callbacks: cb.jump_to_epoch(self.state_dict['epoch']) - - def on_epoch_begin(self)->None: - "Handle new epoch." - self.state_dict['num_batch'],self.state_dict['stop_training'] = 0,False - self('epoch_begin') - - def on_batch_begin(self, xb:Tensor, yb:Tensor, train:bool=True)->Tuple[Any,Any]: - "Handle new batch `xb`,`yb` in `train` or validation." - self.state_dict.update(dict(last_input=xb, last_target=yb, train=train, - stop_epoch=False, skip_step=False, skip_zero=False, skip_bwd=False)) - self('batch_begin', mets = not self.state_dict['train']) - return self.state_dict['last_input'], self.state_dict['last_target'] - - def on_loss_begin(self, out:Tensor)->Any: - "Handle start of loss calculation with model output `out`." - self.state_dict['last_output'] = out - self('loss_begin', call_mets=False) - return self.state_dict['last_output'] - - def on_backward_begin(self, loss:Tensor)->Tuple[Any,Any]: - "Handle gradient calculation on `loss`." - self.smoothener.add_value(loss.detach().cpu()) - self.state_dict['last_loss'], self.state_dict['smooth_loss'] = loss, self.smoothener.smooth - self('backward_begin', call_mets=False) - return self.state_dict['last_loss'], self.state_dict['skip_bwd'] - - def on_backward_end(self)->Any: - "Handle end of gradient calculation." - self('backward_end', call_mets=False) - return self.state_dict['skip_step'] - - def on_step_end(self)->Any: - "Handle end of optimization step." - self('step_end', call_mets=False) - return self.state_dict['skip_zero'] - - def on_batch_end(self, loss:Tensor)->Any: - "Handle end of processing one batch with `loss`." - self.state_dict['last_loss'] = loss - self('batch_end', call_mets = not self.state_dict['train']) - if self.state_dict['train']: - self.state_dict['iteration'] += 1 - self.state_dict['num_batch'] += 1 - return self.state_dict['stop_epoch'] - - def on_epoch_end(self, val_loss:Tensor)->bool: - "Epoch is done, process `val_loss`." - self.state_dict['last_metrics'] = [val_loss] if val_loss is not None else [None] - self('epoch_end', call_mets = val_loss is not None) - self.state_dict['epoch'] += 1 - return self.state_dict['stop_training'] - - def on_train_end(self, exception:Union[bool,Exception])->None: - "Handle end of training, `exception` is an `Exception` or False if no exceptions during training." - self('train_end', exception=exception) - - @property - def skip_validate(self): return self.state_dict['skip_validate'] - -class AverageMetric(Callback): - "Wrap a `func` in a callback for metrics computation." - def __init__(self, func): - # If func has a __name__ use this one else it should be a partial - name = func.__name__ if hasattr(func, '__name__') else func.func.__name__ - self.func, self.name = func, name - self.world = num_distrib() - - def on_epoch_begin(self, **kwargs): - "Set the inner value to 0." - self.val, self.count = 0.,0 - - def on_batch_end(self, last_output, last_target, **kwargs): - "Update metric computation with `last_output` and `last_target`." - if not is_listy(last_target): last_target=[last_target] - self.count += first_el(last_target).size(0) - val = self.func(last_output, *last_target) - if self.world: - val = val.clone() - dist.all_reduce(val, op=dist.ReduceOp.SUM) - val /= self.world - self.val += first_el(last_target).size(0) * val.detach().cpu() - - def on_epoch_end(self, last_metrics, **kwargs): - "Set the final result in `last_metrics`." - return add_metrics(last_metrics, self.val/self.count) - -def annealing_no(start:Number, end:Number, pct:float)->Number: - "No annealing, always return `start`." - return start -def annealing_linear(start:Number, end:Number, pct:float)->Number: - "Linearly anneal from `start` to `end` as pct goes from 0.0 to 1.0." - return start + pct * (end-start) -def annealing_exp(start:Number, end:Number, pct:float)->Number: - "Exponentially anneal from `start` to `end` as pct goes from 0.0 to 1.0." - return start * (end/start) ** pct -def annealing_cos(start:Number, end:Number, pct:float)->Number: - "Cosine anneal from `start` to `end` as pct goes from 0.0 to 1.0." - cos_out = np.cos(np.pi * pct) + 1 - return end + (start-end)/2 * cos_out - -def do_annealing_poly(start:Number, end:Number, pct:float, degree:Number)->Number: - "Helper function for `anneal_poly`." - return end + (start-end) * (1-pct)**degree -def annealing_poly(degree:Number)->Number: - "Anneal polynomically from `start` to `end` as pct goes from 0.0 to 1.0." - return functools.partial(do_annealing_poly, degree=degree) - -class Scheduler(): - "Used to \"step\" from start,end (`vals`) over `n_iter` iterations on a schedule defined by `func`" - def __init__(self, vals:StartOptEnd, n_iter:int, func:Optional[AnnealFunc]=None): - self.start,self.end = (vals[0],vals[1]) if is_tuple(vals) else (vals,0) - self.n_iter = max(1,n_iter) - if func is None: self.func = annealing_linear if is_tuple(vals) else annealing_no - else: self.func = func - self.n = 0 - - def restart(self): self.n = 0 - - def step(self)->Number: - "Return next value along annealed schedule." - self.n += 1 - return self.func(self.start, self.end, self.n/self.n_iter) - - @property - def is_done(self)->bool: - "Return `True` if schedule completed." - return self.n >= self.n_iter - diff --git a/spaces/aliabd/SummerTime/tests/evaluation_test.py b/spaces/aliabd/SummerTime/tests/evaluation_test.py deleted file mode 100644 index aa48ff0b07633090551aa847405832b578ef18ce..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/tests/evaluation_test.py +++ /dev/null @@ -1,71 +0,0 @@ -import unittest -from typing import Tuple, List, Dict - -from evaluation import SUPPORTED_EVALUATION_METRICS - -from helpers import print_with_color - - -class TestEvaluationMetrics(unittest.TestCase): - def get_summary_pairs(self, size: int = 1) -> Tuple[List[str]]: - test_output = ( - [ - """ - Glowing letters that had been hanging above - the Yankee stadium from 1976 to 2008 were placed for auction at - Sotheby’s on Wednesday, but were not sold, The current owner - of the sign is Reggie Jackson, a Yankee hall-of-famer.""" - ] - * size - ) - test_target = ( - [ - """ - An auction for the lights from Yankee Stadium failed to - produce any bids on Wednesday at Sotheby’s. The lights, - currently owned by former Yankees player Reggie Jackson, - lit the stadium from 1976 until 2008.""" - ] - * size - ) - - return test_output, test_target - - def test_evaluate(self): - print_with_color(f"{'#'*10} Testing all evaluation metrics... {'#'*10}\n", "35") - - num_eval_metrics = 0 - - for metric_class in SUPPORTED_EVALUATION_METRICS: - # if metric_class in [Rouge, RougeWe]: - # # TODO: Temporarily skipping Rouge/RougeWE metrics to avoid local bug. - # continue - - print_with_color(f"Testing {metric_class.metric_name}...", "35") - - metric = metric_class() - - test_output, test_target = self.get_summary_pairs() - score_dict = metric.evaluate(test_output, test_target) - print(f"{metric_class} output dictionary") - print(score_dict) - self.assertTrue(isinstance(score_dict, Dict)) - self.assertNotEqual(score_dict, {}) - - for k, v in score_dict.items(): - self.assertTrue(isinstance(k, str) and isinstance(v, float)) - # # TODO: add metric score range assertions - # self.assertTrue(self.range[0] <= score_dict[k]) - # self.assertTrue(score_dict[k] <= self.range[1]) - - print_with_color(f"{metric_class.metric_name} test complete\n", "32") - num_eval_metrics += 1 - - print_with_color( - f"{'#'*10} Evaluation metrics test complete ({num_eval_metrics} metrics) {'#'*10}", - "32", - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/allknowingroger/Image-Models-Test209/app.py b/spaces/allknowingroger/Image-Models-Test209/app.py deleted file mode 100644 index fc5088a159ad7070185b2543d13e8e8e8a6d5263..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test209/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "lberglund/sweep_full_2_20231012114749", - "lberglund/sweep_full_1_20231012111005", - "lberglund/sweep_full_0_20231012104517", - "lberglund/sweep_quick_1_20231012103532", - "lberglund/sweep_quick_0_20231012102921", - "lberglund/test_20231012100010", - "nakli/kripa_ai", - "stabilityai/stable-diffusion-2-1", - "zeerakwyne/dreambooth_lora_model", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.h b/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.h deleted file mode 100644 index c9e2032bcac9d2abde7a75eea4d812da348afadd..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/domain-expansion/torch_utils/ops/upfirdn2d.h +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct upfirdn2d_kernel_params -{ - const void* x; - const float* f; - void* y; - - int2 up; - int2 down; - int2 pad0; - int flip; - float gain; - - int4 inSize; // [width, height, channel, batch] - int4 inStride; - int2 filterSize; // [width, height] - int2 filterStride; - int4 outSize; // [width, height, channel, batch] - int4 outStride; - int sizeMinor; - int sizeMajor; - - int loopMinor; - int loopMajor; - int loopX; - int launchMinor; - int launchMajor; -}; - -//------------------------------------------------------------------------ -// CUDA kernel specialization. - -struct upfirdn2d_kernel_spec -{ - void* kernel; - int tileOutW; - int tileOutH; - int loopMinor; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/anaclaudia13ct/insect_detection/utils/torch_utils.py b/spaces/anaclaudia13ct/insect_detection/utils/torch_utils.py deleted file mode 100644 index 77549b005ceb7499830256b1ae05eb834fa310c9..0000000000000000000000000000000000000000 --- a/spaces/anaclaudia13ct/insect_detection/utils/torch_utils.py +++ /dev/null @@ -1,432 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -PyTorch utils -""" - -import math -import os -import platform -import subprocess -import time -import warnings -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.distributed as dist -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.parallel import DistributedDataParallel as DDP - -from utils.general import LOGGER, check_version, colorstr, file_date, git_describe - -LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1)) # https://pytorch.org/docs/stable/elastic/run.html -RANK = int(os.getenv('RANK', -1)) -WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1)) - -try: - import thop # for FLOPs computation -except ImportError: - thop = None - -# Suppress PyTorch warnings -warnings.filterwarnings('ignore', message='User provided device_type of \'cuda\', but CUDA is not available. Disabling') -warnings.filterwarnings('ignore', category=UserWarning) - - -def smart_inference_mode(torch_1_9=check_version(torch.__version__, '1.9.0')): - # Applies torch.inference_mode() decorator if torch>=1.9.0 else torch.no_grad() decorator - def decorate(fn): - return (torch.inference_mode if torch_1_9 else torch.no_grad)()(fn) - - return decorate - - -def smartCrossEntropyLoss(label_smoothing=0.0): - # Returns nn.CrossEntropyLoss with label smoothing enabled for torch>=1.10.0 - if check_version(torch.__version__, '1.10.0'): - return nn.CrossEntropyLoss(label_smoothing=label_smoothing) - if label_smoothing > 0: - LOGGER.warning(f'WARNING ⚠️ label smoothing {label_smoothing} requires torch>=1.10.0') - return nn.CrossEntropyLoss() - - -def smart_DDP(model): - # Model DDP creation with checks - assert not check_version(torch.__version__, '1.12.0', pinned=True), \ - 'torch==1.12.0 torchvision==0.13.0 DDP training is not supported due to a known issue. ' \ - 'Please upgrade or downgrade torch to use DDP. See https://github.com/ultralytics/yolov5/issues/8395' - if check_version(torch.__version__, '1.11.0'): - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK, static_graph=True) - else: - return DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK) - - -def reshape_classifier_output(model, n=1000): - # Update a TorchVision classification model to class count 'n' if required - from models.common import Classify - name, m = list((model.model if hasattr(model, 'model') else model).named_children())[-1] # last module - if isinstance(m, Classify): # YOLOv5 Classify() head - if m.linear.out_features != n: - m.linear = nn.Linear(m.linear.in_features, n) - elif isinstance(m, nn.Linear): # ResNet, EfficientNet - if m.out_features != n: - setattr(model, name, nn.Linear(m.in_features, n)) - elif isinstance(m, nn.Sequential): - types = [type(x) for x in m] - if nn.Linear in types: - i = types.index(nn.Linear) # nn.Linear index - if m[i].out_features != n: - m[i] = nn.Linear(m[i].in_features, n) - elif nn.Conv2d in types: - i = types.index(nn.Conv2d) # nn.Conv2d index - if m[i].out_channels != n: - m[i] = nn.Conv2d(m[i].in_channels, n, m[i].kernel_size, m[i].stride, bias=m[i].bias is not None) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - # Decorator to make all processes in distributed training wait for each local_master to do something - if local_rank not in [-1, 0]: - dist.barrier(device_ids=[local_rank]) - yield - if local_rank == 0: - dist.barrier(device_ids=[0]) - - -def device_count(): - # Returns number of CUDA devices available. Safe version of torch.cuda.device_count(). Supports Linux and Windows - assert platform.system() in ('Linux', 'Windows'), 'device_count() only supported on Linux or Windows' - try: - cmd = 'nvidia-smi -L | wc -l' if platform.system() == 'Linux' else 'nvidia-smi -L | find /c /v ""' # Windows - return int(subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]) - except Exception: - return 0 - - -def select_device(device='', batch_size=0, newline=True): - # device = None or 'cpu' or 0 or '0' or '0,1,2,3' - s = f'YOLOv5 🚀 {git_describe() or file_date()} Python-{platform.python_version()} torch-{torch.__version__} ' - device = str(device).strip().lower().replace('cuda:', '').replace('none', '') # to string, 'cuda:0' to '0' - cpu = device == 'cpu' - mps = device == 'mps' # Apple Metal Performance Shaders (MPS) - if cpu or mps: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - must be before assert is_available() - assert torch.cuda.is_available() and torch.cuda.device_count() >= len(device.replace(',', '')), \ - f"Invalid CUDA '--device {device}' requested, use '--device cpu' or pass valid CUDA device(s)" - - if not cpu and not mps and torch.cuda.is_available(): # prefer GPU if available - devices = device.split(',') if device else '0' # range(torch.cuda.device_count()) # i.e. 0,1,6,7 - n = len(devices) # device count - if n > 1 and batch_size > 0: # check batch_size is divisible by device_count - assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' - space = ' ' * (len(s) + 1) - for i, d in enumerate(devices): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / (1 << 20):.0f}MiB)\n" # bytes to MB - arg = 'cuda:0' - elif mps and getattr(torch, 'has_mps', False) and torch.backends.mps.is_available(): # prefer MPS if available - s += 'MPS\n' - arg = 'mps' - else: # revert to CPU - s += 'CPU\n' - arg = 'cpu' - - if not newline: - s = s.rstrip() - LOGGER.info(s) - return torch.device(arg) - - -def time_sync(): - # PyTorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(input, ops, n=10, device=None): - """ YOLOv5 speed/memory/FLOPs profiler - Usage: - input = torch.randn(16, 3, 640, 640) - m1 = lambda x: x * torch.sigmoid(x) - m2 = nn.SiLU() - profile(input, [m1, m2], n=100) # profile over 100 iterations - """ - results = [] - if not isinstance(device, torch.device): - device = select_device(device) - print(f"{'Params':>12s}{'GFLOPs':>12s}{'GPU_mem (GB)':>14s}{'forward (ms)':>14s}{'backward (ms)':>14s}" - f"{'input':>24s}{'output':>24s}") - - for x in input if isinstance(input, list) else [input]: - x = x.to(device) - x.requires_grad = True - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, 'to') else m # device - m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m - tf, tb, t = 0, 0, [0, 0, 0] # dt forward, backward - try: - flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPs - except Exception: - flops = 0 - - try: - for _ in range(n): - t[0] = time_sync() - y = m(x) - t[1] = time_sync() - try: - _ = (sum(yi.sum() for yi in y) if isinstance(y, list) else y).sum().backward() - t[2] = time_sync() - except Exception: # no backward method - # print(e) # for debug - t[2] = float('nan') - tf += (t[1] - t[0]) * 1000 / n # ms per op forward - tb += (t[2] - t[1]) * 1000 / n # ms per op backward - mem = torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0 # (GB) - s_in, s_out = (tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' for x in (x, y)) # shapes - p = sum(x.numel() for x in m.parameters()) if isinstance(m, nn.Module) else 0 # parameters - print(f'{p:12}{flops:12.4g}{mem:>14.3f}{tf:14.4g}{tb:14.4g}{str(s_in):>24s}{str(s_out):>24s}') - results.append([p, flops, mem, tf, tb, s_in, s_out]) - except Exception as e: - print(e) - results.append(None) - torch.cuda.empty_cache() - return results - - -def is_parallel(model): - # Returns True if model is of type DP or DDP - return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) - - -def de_parallel(model): - # De-parallelize a model: returns single-GPU model if model is of type DP or DDP - return model.module if is_parallel(model) else model - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] - - -def sparsity(model): - # Return global model sparsity - a, b = 0, 0 - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name='weight', amount=amount) # prune - prune.remove(m, 'weight') # make permanent - LOGGER.info(f'Model pruned to {sparsity(model):.3g} global sparsity') - - -def fuse_conv_and_bn(conv, bn): - # Fuse Conv2d() and BatchNorm2d() layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = nn.Conv2d(conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - dilation=conv.dilation, - groups=conv.groups, - bias=True).requires_grad_(False).to(conv.weight.device) - - # Prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # Prepare spatial bias - b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def model_info(model, verbose=False, imgsz=640): - # Model information. img_size may be int or list, i.e. img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients - if verbose: - print(f"{'layer':>5} {'name':>40} {'gradient':>9} {'parameters':>12} {'shape':>20} {'mu':>10} {'sigma':>10}") - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace('module_list.', '') - print('%5g %40s %9s %12g %20s %10.3g %10.3g' % - (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) - - try: # FLOPs - p = next(model.parameters()) - stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 # max stride - im = torch.empty((1, p.shape[1], stride, stride), device=p.device) # input image in BCHW format - flops = thop.profile(deepcopy(model), inputs=(im,), verbose=False)[0] / 1E9 * 2 # stride GFLOPs - imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz] # expand if int/float - fs = f', {flops * imgsz[0] / stride * imgsz[1] / stride:.1f} GFLOPs' # 640x640 GFLOPs - except Exception: - fs = '' - - name = Path(model.yaml_file).stem.replace('yolov5', 'YOLOv5') if hasattr(model, 'yaml_file') else 'Model' - LOGGER.info(f"{name} summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # Scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = (math.ceil(x * ratio / gs) * gs for x in (h, w)) - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if (len(include) and k not in include) or k.startswith('_') or k in exclude: - continue - else: - setattr(a, k, v) - - -def smart_optimizer(model, name='Adam', lr=0.001, momentum=0.9, decay=1e-5): - # YOLOv5 3-param group optimizer: 0) weights with decay, 1) weights no decay, 2) biases no decay - g = [], [], [] # optimizer parameter groups - bn = tuple(v for k, v in nn.__dict__.items() if 'Norm' in k) # normalization layers, i.e. BatchNorm2d() - for v in model.modules(): - for p_name, p in v.named_parameters(recurse=0): - if p_name == 'bias': # bias (no decay) - g[2].append(p) - elif p_name == 'weight' and isinstance(v, bn): # weight (no decay) - g[1].append(p) - else: - g[0].append(p) # weight (with decay) - - if name == 'Adam': - optimizer = torch.optim.Adam(g[2], lr=lr, betas=(momentum, 0.999)) # adjust beta1 to momentum - elif name == 'AdamW': - optimizer = torch.optim.AdamW(g[2], lr=lr, betas=(momentum, 0.999), weight_decay=0.0) - elif name == 'RMSProp': - optimizer = torch.optim.RMSprop(g[2], lr=lr, momentum=momentum) - elif name == 'SGD': - optimizer = torch.optim.SGD(g[2], lr=lr, momentum=momentum, nesterov=True) - else: - raise NotImplementedError(f'Optimizer {name} not implemented.') - - optimizer.add_param_group({'params': g[0], 'weight_decay': decay}) # add g0 with weight_decay - optimizer.add_param_group({'params': g[1], 'weight_decay': 0.0}) # add g1 (BatchNorm2d weights) - LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__}(lr={lr}) with parameter groups " - f"{len(g[1])} weight(decay=0.0), {len(g[0])} weight(decay={decay}), {len(g[2])} bias") - return optimizer - - -def smart_hub_load(repo='ultralytics/yolov5', model='yolov5s', **kwargs): - # YOLOv5 torch.hub.load() wrapper with smart error/issue handling - if check_version(torch.__version__, '1.9.1'): - kwargs['skip_validation'] = True # validation causes GitHub API rate limit errors - if check_version(torch.__version__, '1.12.0'): - kwargs['trust_repo'] = True # argument required starting in torch 0.12 - try: - return torch.hub.load(repo, model, **kwargs) - except Exception: - return torch.hub.load(repo, model, force_reload=True, **kwargs) - - -def smart_resume(ckpt, optimizer, ema=None, weights='yolov5s.pt', epochs=300, resume=True): - # Resume training from a partially trained checkpoint - best_fitness = 0.0 - start_epoch = ckpt['epoch'] + 1 - if ckpt['optimizer'] is not None: - optimizer.load_state_dict(ckpt['optimizer']) # optimizer - best_fitness = ckpt['best_fitness'] - if ema and ckpt.get('ema'): - ema.ema.load_state_dict(ckpt['ema'].float().state_dict()) # EMA - ema.updates = ckpt['updates'] - if resume: - assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.\n' \ - f"Start a new training without --resume, i.e. 'python train.py --weights {weights}'" - LOGGER.info(f'Resuming training from {weights} from epoch {start_epoch} to {epochs} total epochs') - if epochs < start_epoch: - LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.") - epochs += ckpt['epoch'] # finetune additional epochs - return best_fitness, start_epoch, epochs - - -class EarlyStopping: - # YOLOv5 simple early stopper - def __init__(self, patience=30): - self.best_fitness = 0.0 # i.e. mAP - self.best_epoch = 0 - self.patience = patience or float('inf') # epochs to wait after fitness stops improving to stop - self.possible_stop = False # possible stop may occur next epoch - - def __call__(self, epoch, fitness): - if fitness >= self.best_fitness: # >= 0 to allow for early zero-fitness stage of training - self.best_epoch = epoch - self.best_fitness = fitness - delta = epoch - self.best_epoch # epochs without improvement - self.possible_stop = delta >= (self.patience - 1) # possible stop may occur next epoch - stop = delta >= self.patience # stop training if patience exceeded - if stop: - LOGGER.info(f'Stopping training early as no improvement observed in last {self.patience} epochs. ' - f'Best results observed at epoch {self.best_epoch}, best model saved as best.pt.\n' - f'To update EarlyStopping(patience={self.patience}) pass a new patience value, ' - f'i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.') - return stop - - -class ModelEMA: - """ Updated Exponential Moving Average (EMA) from https://github.com/rwightman/pytorch-image-models - Keeps a moving average of everything in the model state_dict (parameters and buffers) - For EMA details see https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - """ - - def __init__(self, model, decay=0.9999, tau=2000, updates=0): - # Create EMA - self.ema = deepcopy(de_parallel(model)).eval() # FP32 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / tau)) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - self.updates += 1 - d = self.decay(self.updates) - - msd = de_parallel(model).state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: # true for FP16 and FP32 - v *= d - v += (1 - d) * msd[k].detach() - # assert v.dtype == msd[k].dtype == torch.float32, f'{k}: EMA {v.dtype} and model {msd[k].dtype} must be FP32' - - def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) diff --git a/spaces/anilkumar-kanasani/chat-with-your-pdf/app.py b/spaces/anilkumar-kanasani/chat-with-your-pdf/app.py deleted file mode 100644 index 396dc119f06936424e3f2585a4cee60d1d64dc0d..0000000000000000000000000000000000000000 --- a/spaces/anilkumar-kanasani/chat-with-your-pdf/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import streamlit as st -from PyPDF2 import PdfReader -from langchain.vectorstores import FAISS -from langchain.chains import LLMChain, ConversationalRetrievalChain -from utils import (get_hf_embeddings, - get_openAI_chat_model, - get_hf_model, - get_local_gpt4_model, - set_LangChain_tracking, - check_password) -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.memory import ConversationBufferMemory -from langchain.docstore.document import Document - -embeddings = get_hf_embeddings() -openai_chat_model = get_openAI_chat_model() -#local_model = get_local_gpt4_model(model = "GPT4All-13B-snoozy.ggmlv3.q4_0.bin") -hf_chat_model = get_hf_model(repo_id = "tiiuae/falcon-40b") - -## Preparing Prompt -from langchain.prompts import PromptTemplate -entity_extraction_template = """ -Extract all top 10 important entites from the following context \ -return as python list \ -{input_text} \ -List of entities:""" -ENTITY_EXTRACTION_PROMPT = PromptTemplate.from_template(entity_extraction_template) - -def get_qa_prompt(List_of_entities): - qa_template = """ - Use the following pieces of context to answer the question at the end. \ - Use the following list of entities as your working scope. \ - If the question is out of given list of entities, just say that your question \ - is out of scope and give them the list of entities as your working scope \ - If you dont know the answer, just say that you don't know and tell \ - the user to seach web for more information, don't try to make up \ - an answer. Use three sentences maximum and keep the answer as \ - concise as possible.\ - list of entities: \ - """ + str(List_of_entities) + """ \ - context: {context} \ - Question: {question} \ - Helpful Answer:""" - print(qa_template) - QA_CHAIN_PROMPT = PromptTemplate.from_template(qa_template) - - return QA_CHAIN_PROMPT - -if check_password(): - st.title("Chat with your PDF ") - st.session_state.file_tracking = "new_run" - with st.expander("Upload your PDF : ", expanded=True): - st.session_state.lc_tracking = st.text_input("Please give a name to your session?") - input_file = st.file_uploader(label = "Upload a file", - accept_multiple_files=False, - type=["pdf"], - ) - if st.button("Process the file"): - st.session_state.file_tracking = "req_to_process" - try: - set_LangChain_tracking(project=str(st.session_state.lc_tracking)) - except: - set_LangChain_tracking(project="default") - if st.session_state.file_tracking == "req_to_process" and input_file is not None: - # Load Text Data - input_text = '' - bytes_data = PdfReader(input_file) - for page in bytes_data.pages: - input_text += page.extract_text() - - st.session_state.ner_chain = LLMChain(llm=hf_chat_model, prompt=ENTITY_EXTRACTION_PROMPT) - st.session_state.ners = st.session_state.ner_chain.run(input_text=input_text, verbose=True) - - input_text = input_text.replace('\n', '') - text_doc_chunks = [Document(page_content=x, metadata={}) for x in input_text.split('.')] - - # Embed and VectorStore - vector_store = FAISS.from_documents(text_doc_chunks, embeddings) - st.session_state.chat_history = [] - st.session_state.formatted_prompt = get_qa_prompt(st.session_state.ners) - st.session_state.chat_chain = ConversationalRetrievalChain.from_llm( - hf_chat_model, - chain_type="stuff", # "stuff", "map_reduce", "refine", "map_rerank" - verbose=True, - retriever=vector_store.as_retriever(), - # search_type="mmr" - # search_kwargs={"k": 1} - # search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5} - combine_docs_chain_kwargs={"prompt": st.session_state.formatted_prompt}, - ) - if "chat_chain" in st.session_state: - st.header("We are ready to start chat with your pdf") - st.subheader("The scope of your PDF is: ") - st.markdown(st.session_state.ners) - else: - st.header("Upload and Process your file first") - - - if "chat_chain" in st.session_state and st.session_state.chat_history is not None: - if question := st.chat_input("Please type some thing here?"): - response = st.session_state.chat_chain({"question": question, "chat_history": st.session_state.chat_history}) - st.session_state.chat_history.append((question, response["answer"])) - - # Display chat messages from history on app rerun - for message in st.session_state.chat_history: - with st.chat_message("user"): - st.markdown(message[0]) - with st.chat_message("assistant"): - st.markdown(message[1]) \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/llava/README.md b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/llava/README.md deleted file mode 100644 index 287162efef3ab7a047bef9d5cb37c16871703fd4..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/llava/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# LLaVA - -## Description -Adds [LLaVA 13B](https://github.com/haotian-liu/LLaVA) multimodality support to text-generation-webui. - -https://user-images.githubusercontent.com/3718215/233817203-69b57e77-0c55-4fd6-b742-3204bb13b8fc.mp4 - -## LLaVA-7B -7B version currently isn't supported. It will be supported if/when [more generic multimodality support](https://github.com/oobabooga/text-generation-webui/discussions/1687) gets implemented. - -## Usage -To run this extension, download LLaVA weights, for example from [here](https://huggingface.co/wojtab/llava-13b-v0-4bit-128g) (note: it's a 4-bit [GPTQ quantization](https://github.com/oobabooga/text-generation-webui/tree/main/docs/GPTQ-models-(4-bit-mode).md), done on "old CUDA" branch), and then start server.py with `--extensions llava` argument. - -Do note, that each image takes up 258 tokens, so adjust max_new_tokens to be at most 1700 (recommended value is between 200 to 500), so the images don't get truncated. - -To send an image, just upload it to the extension field below chat, and send a prompt as always. The image will be added to the end of your message. If you wish to modify the placement, include a string `` in your prompt. - -Additionally, there is *Embed all images, not only the last one* checkbox. It modifies the image embeddings, by default (if it's unchecked), all but the most recent images have their embeddings empty, so they are not fed to the network. From initial testing, it seems as LLaVA considers the features in all images at the same time, so by default the extension skips previous images. If you want to include them anyway, just tick this checkbox. - -## Extension config -This extension uses following parameters (from settings.json): -|Parameter|Description| -|---------|-----------| -|`llava-clip_bits`|Number of bits to load CLIP feature extractor in (either 32 or 16, default=32)| -|`llava-clip_device`|Torch device to run the extractor on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`llava-clip_repo`|Huggingface repository of CLIP model, `openai/clip-vit-large-patch14` by default. There should be no need to change it| -|`llava-projector_bits`|Number of bits to load CLIP->LLaMA feature projector in (either 32 or 16, default=32)| -|`llava-projector_device`|Torch device to run the CLIP->LLaMA feature projector on, for example `cpu` or `cuda:0`, by default `cuda:0` if available| -|`llava-projector_repo`|Huggingface repository of multimodal projector, `liuhaotian/LLaVA-13b-delta-v0` by default. There should be no need to change it| -|`llava-projector_filename`|The filename of multimodal projector weights, `mm_projector.bin` by default. There should be no need to change it| -|`llava-add_all_images_to_prompt`|Default value of "Embed all images, not only the last one" checkbox| -## Technical description - -### Original LLaVA -The default LLaVA implementation uses modified `transformers` library, however this extension forgoes this requirement. The transformers are modified in LLaVA in such a way, that the entire LLaVA model gets loaded, and the inference now looks as follows: -``` -images --> CLIP --> projector --> input embeddings for images --> | - | --> LLaMA -prompt -------------------------> input embeddings for text ----> | -``` -The images are represented in the prompt by the following token IDs: -- 32000 - `` - placeholder token for embeddings from projector -- 32001 - `` - token marking start of an image -- 32002 - `` - token marking end of an image - -By default, image will be represented as `*256`. The input embeddings for an image are converted with a single linear layer of the projector, then they are placed instead of `` tokens. -The concatenated prompt then gets fed to fine-tuned LLaMA. - -### In this extension - -Using default transformers, they only load the LLaMA part of LLaVA, ignoring the added projector weights, and not loading CLIP. We then reconstruct the `images -> CLIP -> projector` pipeline ourselves, then concatenate the input embeddings, and feed it to LLaMA loaded by transformers. This allows us to use normal flow from webui to load this model, and just hijack the model input with additional features. -Splitting it to 3 separate models, allows us to configure each of them, and to move them to different devices(for example we can run CLIP+projector on CPU and LLaMA on GPU). Also, it enables us to use 4-bit GPTQ quantization for LLaVA, massively cutting down the VRAM requirement (it should be possible to fit on 12GB of VRAM with full context size by moving CLIP and projector to CPU). - -### Usage through API - -You can run the multimodal inference through API, by inputting the images to prompt. Images are embedded like so: `f''`, where `img_str` is base-64 jpeg data. Python example: -```Python -import base64 -import requests - -CONTEXT = "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.\n### Human: \nHi!\n### Assistant: \nHi there! How can I help you today?\n" - -with open('extreme_ironing.jpg', 'rb') as f: - img_str = base64.b64encode(f.read()).decode('utf-8') - prompt = CONTEXT + f'### Human: \nWhat is unusual about this image: \n\n### Assistant: \n' - print(requests.post('http://127.0.0.1:5000/api/v1/generate', json={'prompt': prompt, 'stopping_strings': ['\n###']}).json()) -``` -script output: -```Python -{'results': [{'text': "The unusual aspect of this image is that a man is standing on top of a yellow minivan while doing his laundry. He has set up a makeshift clothes line using the car's rooftop as an outdoor drying area. This scene is uncommon because people typically do their laundry indoors, in a dedicated space like a laundromat or a room in their home, rather than on top of a moving vehicle. Additionally, hanging clothes on the car could be potentially hazardous or illegal in some jurisdictions due to the risk of damaging the vehicle or causing accidents on the road.\n##"}]} -``` \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/wav2vec_alignment.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/wav2vec_alignment.py deleted file mode 100644 index 47456cc5ac41b7ed9522fe543affc8482218730c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/wav2vec_alignment.py +++ /dev/null @@ -1,150 +0,0 @@ -import torch -import torchaudio -from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2ForCTC - - -def max_alignment(s1, s2, skip_character="~", record=None): - """ - A clever function that aligns s1 to s2 as best it can. Wherever a character from s1 is not found in s2, a '~' is - used to replace that character. - - Finally got to use my DP skills! - """ - if record is None: - record = {} - assert skip_character not in s1, f"Found the skip character {skip_character} in the provided string, {s1}" - if len(s1) == 0: - return "" - if len(s2) == 0: - return skip_character * len(s1) - if s1 == s2: - return s1 - if s1[0] == s2[0]: - return s1[0] + max_alignment(s1[1:], s2[1:], skip_character, record) - - take_s1_key = (len(s1), len(s2) - 1) - if take_s1_key in record: - take_s1, take_s1_score = record[take_s1_key] - else: - take_s1 = max_alignment(s1, s2[1:], skip_character, record) - take_s1_score = len(take_s1.replace(skip_character, "")) - record[take_s1_key] = (take_s1, take_s1_score) - - take_s2_key = (len(s1) - 1, len(s2)) - if take_s2_key in record: - take_s2, take_s2_score = record[take_s2_key] - else: - take_s2 = max_alignment(s1[1:], s2, skip_character, record) - take_s2_score = len(take_s2.replace(skip_character, "")) - record[take_s2_key] = (take_s2, take_s2_score) - - return take_s1 if take_s1_score > take_s2_score else skip_character + take_s2 - - -class Wav2VecAlignment: - """ - Uses wav2vec2 to perform audio<->text alignment. - """ - - def __init__(self, device="cuda"): - self.model = Wav2Vec2ForCTC.from_pretrained("jbetker/wav2vec2-large-robust-ft-libritts-voxpopuli").cpu() - self.feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-960h") - self.tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("jbetker/tacotron-symbols") - self.device = device - - def align(self, audio, expected_text, audio_sample_rate=24000): - orig_len = audio.shape[-1] - - with torch.no_grad(): - self.model = self.model.to(self.device) - audio = audio.to(self.device) - audio = torchaudio.functional.resample(audio, audio_sample_rate, 16000) - clip_norm = (audio - audio.mean()) / torch.sqrt(audio.var() + 1e-7) - logits = self.model(clip_norm).logits - self.model = self.model.cpu() - - logits = logits[0] - pred_string = self.tokenizer.decode(logits.argmax(-1).tolist()) - - fixed_expectation = max_alignment(expected_text.lower(), pred_string) - w2v_compression = orig_len // logits.shape[0] - expected_tokens = self.tokenizer.encode(fixed_expectation) - expected_chars = list(fixed_expectation) - if len(expected_tokens) == 1: - return [0] # The alignment is simple; there is only one token. - expected_tokens.pop(0) # The first token is a given. - expected_chars.pop(0) - - alignments = [0] - - def pop_till_you_win(): - if len(expected_tokens) == 0: - return None - popped = expected_tokens.pop(0) - popped_char = expected_chars.pop(0) - while popped_char == "~": - alignments.append(-1) - if len(expected_tokens) == 0: - return None - popped = expected_tokens.pop(0) - popped_char = expected_chars.pop(0) - return popped - - next_expected_token = pop_till_you_win() - for i, logit in enumerate(logits): - top = logit.argmax() - if next_expected_token == top: - alignments.append(i * w2v_compression) - if len(expected_tokens) > 0: - next_expected_token = pop_till_you_win() - else: - break - - pop_till_you_win() - if not (len(expected_tokens) == 0 and len(alignments) == len(expected_text)): - torch.save([audio, expected_text], "alignment_debug.pth") - assert False, ( - "Something went wrong with the alignment algorithm. I've dumped a file, 'alignment_debug.pth' to" - "your current working directory. Please report this along with the file so it can get fixed." - ) - - # Now fix up alignments. Anything with -1 should be interpolated. - alignments.append(orig_len) # This'll get removed but makes the algorithm below more readable. - for i in range(len(alignments)): - if alignments[i] == -1: - for j in range(i + 1, len(alignments)): - if alignments[j] != -1: - next_found_token = j - break - for j in range(i, next_found_token): - gap = alignments[next_found_token] - alignments[i - 1] - alignments[j] = (j - i + 1) * gap // (next_found_token - i + 1) + alignments[i - 1] - - return alignments[:-1] - - def redact(self, audio, expected_text, audio_sample_rate=24000): - if "[" not in expected_text: - return audio - splitted = expected_text.split("[") - fully_split = [splitted[0]] - for spl in splitted[1:]: - assert "]" in spl, 'Every "[" character must be paired with a "]" with no nesting.' - fully_split.extend(spl.split("]")) - - # At this point, fully_split is a list of strings, with every other string being something that should be redacted. - non_redacted_intervals = [] - last_point = 0 - for i in range(len(fully_split)): - if i % 2 == 0: - end_interval = max(0, last_point + len(fully_split[i]) - 1) - non_redacted_intervals.append((last_point, end_interval)) - last_point += len(fully_split[i]) - - bare_text = "".join(fully_split) - alignments = self.align(audio, bare_text, audio_sample_rate) - - output_audio = [] - for nri in non_redacted_intervals: - start, stop = nri - output_audio.append(audio[:, alignments[start] : alignments[stop]]) - return torch.cat(output_audio, dim=-1) diff --git a/spaces/ashercn97/AsherTesting/docs/LLaMA-model.md b/spaces/ashercn97/AsherTesting/docs/LLaMA-model.md deleted file mode 100644 index ba7350f59c54c8ad821619cef2207763b09b3ef3..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/docs/LLaMA-model.md +++ /dev/null @@ -1,56 +0,0 @@ -LLaMA is a Large Language Model developed by Meta AI. - -It was trained on more tokens than previous models. The result is that the smallest version with 7 billion parameters has similar performance to GPT-3 with 175 billion parameters. - -This guide will cover usage through the official `transformers` implementation. For 4-bit mode, head over to [GPTQ models (4 bit mode) -](GPTQ-models-(4-bit-mode).md). - -## Getting the weights - -### Option 1: pre-converted weights - -* Direct download (recommended): - -https://huggingface.co/Neko-Institute-of-Science/LLaMA-7B-HF - -https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF - -https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF - -https://huggingface.co/Neko-Institute-of-Science/LLaMA-65B-HF - -* Torrent: - -https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1484235789 - -The tokenizer files in the torrent above are outdated, in particular the files called `tokenizer_config.json` and `special_tokens_map.json`. Here you can find those files: https://huggingface.co/oobabooga/llama-tokenizer - -### Option 2: convert the weights yourself - -1. Install the `protobuf` library: - -``` -pip install protobuf==3.20.1 -``` - -2. Use the script below to convert the model in `.pth` format that you, a fellow academic, downloaded using Meta's official link. - -If you have `transformers` installed in place: - -``` -python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b -``` - -Otherwise download [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) first and run: - -``` -python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b -``` - -3. Move the `llama-7b` folder inside your `text-generation-webui/models` folder. - -## Starting the web UI - -```python -python server.py --model llama-7b -``` diff --git a/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/methodology.tex b/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/methodology.tex deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/avivdm1/AutoGPT/tests/unit/json_tests.py b/spaces/avivdm1/AutoGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, "city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/diffusers_txt2img.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/diffusers_txt2img.py deleted file mode 100644 index 80fbb9723ef591e4c14aebf53386dac4fc3e3b66..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/optimizedSD/diffusers_txt2img.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch -from diffusers import LDMTextToImagePipeline - -pipe = LDMTextToImagePipeline.from_pretrained("CompVis/stable-diffusion-v1-3-diffusers", use_auth_token=True) - -prompt = "19th Century wooden engraving of Elon musk" - -seed = torch.manual_seed(1024) -images = pipe([prompt], batch_size=1, num_inference_steps=50, guidance_scale=7, generator=seed,torch_device="cpu" )["sample"] - -# save images -for idx, image in enumerate(images): - image.save(f"image-{idx}.png") diff --git a/spaces/awacke1/AW-02-H5-AR-VR-IOT/style.css b/spaces/awacke1/AW-02-H5-AR-VR-IOT/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AW-02-H5-AR-VR-IOT/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/Audio-Sentiment-harshit345-xlsr-wav2vec-speech-emotion-recognition/app.py b/spaces/awacke1/Audio-Sentiment-harshit345-xlsr-wav2vec-speech-emotion-recognition/app.py deleted file mode 100644 index 98180fa459dd09a6bbbc048b412dd7349fd54f62..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Audio-Sentiment-harshit345-xlsr-wav2vec-speech-emotion-recognition/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/harshit345/xlsr-wav2vec-speech-emotion-recognition").launch() \ No newline at end of file diff --git a/spaces/awacke1/Biomed-NER-AI-NLP-CT-Demo1/app.py b/spaces/awacke1/Biomed-NER-AI-NLP-CT-Demo1/app.py deleted file mode 100644 index 418d26fd42c4a6dbc3a230e0bb3ee4d9acf0553c..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Biomed-NER-AI-NLP-CT-Demo1/app.py +++ /dev/null @@ -1,331 +0,0 @@ -import gradio as gr -import pandas as pd -import json -from collections import defaultdict - -# Create tokenizer for biomed model -from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification -tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") # https://huggingface.co/d4data/biomedical-ner-all?text=asthma -model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all") -pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple") - -# Matplotlib for entity graph -import matplotlib.pyplot as plt -plt.switch_backend("Agg") - -# Load examples from JSON -import os - -# Load terminology datasets: -basedir = os.path.dirname(__file__) -#dataLOINC = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv') -#dataPanels = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv') -#dataSNOMED = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t') -#dataOMS = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv') -#dataICD10 = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv') - -dataLOINC = pd.read_csv(f'LoincTableCore.csv') -dataPanels = pd.read_csv(f'PanelsAndForms-ACW1208Labeled.csv') -dataSNOMED = pd.read_csv(f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t') -dataOMS = pd.read_csv(f'SnomedOMS.csv') -dataICD10 = pd.read_csv(f'ICD10Diagnosis.csv') - -dir_path = os.path.dirname(os.path.realpath(__file__)) -EXAMPLES = {} -#with open(dir_path + "\\" + "examples.json", "r") as f: -with open("examples.json", "r") as f: - example_json = json.load(f) - EXAMPLES = {x["text"]: x["label"] for x in example_json} - -def MatchLOINC(name): - #basedir = os.path.dirname(__file__) - pd.set_option("display.max_rows", None) - #data = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv') - data = dataLOINC - swith=data.loc[data['COMPONENT'].str.contains(name, case=False, na=False)] - return swith - -def MatchLOINCPanelsandForms(name): - #basedir = os.path.dirname(__file__) - #data = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv') - data = dataPanels - # Assessment Name: - #swith=data.loc[data['ParentName'].str.contains(name, case=False, na=False)] - # Assessment Question: - swith=data.loc[data['LoincName'].str.contains(name, case=False, na=False)] - return swith - -def MatchSNOMED(name): - #basedir = os.path.dirname(__file__) - #data = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t') - data = dataSNOMED - swith=data.loc[data['term'].str.contains(name, case=False, na=False)] - return swith - -def MatchOMS(name): - #basedir = os.path.dirname(__file__) - #data = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv') - data = dataOMS - swith=data.loc[data['SNOMED CT'].str.contains(name, case=False, na=False)] - return swith - -def MatchICD10(name): - #basedir = os.path.dirname(__file__) - #data = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv') - data = dataICD10 - swith=data.loc[data['Description'].str.contains(name, case=False, na=False)] - return swith - -def SaveResult(text, outputfileName): - #try: - basedir = os.path.dirname(__file__) - savePath = outputfileName - print("Saving: " + text + " to " + savePath) - from os.path import exists - file_exists = exists(savePath) - if file_exists: - with open(outputfileName, "a") as f: #append - #for line in text: - f.write(str(text.replace("\n"," "))) - f.write('\n') - else: - with open(outputfileName, "w") as f: #write - #for line in text: - f.write(str(text.replace("\n"," "))) - f.write('\n') - #except ValueError as err: - # raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None - - return - -def loadFile(filename): - try: - basedir = os.path.dirname(__file__) - loadPath = basedir + "\\" + filename - - print("Loading: " + loadPath) - - from os.path import exists - file_exists = exists(loadPath) - - if file_exists: - with open(loadPath, "r") as f: #read - contents = f.read() - print(contents) - return contents - - except ValueError as err: - raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None - - return "" - -def get_today_filename(): - from datetime import datetime - date = datetime.now().strftime("%Y_%m_%d-%I.%M.%S.%p") - #print(f"filename_{date}") 'filename_2023_01_12-03-29-22_AM' - return f"MedNER_{date}.csv" - -def get_base(filename): - basedir = os.path.dirname(__file__) - loadPath = basedir + "\\" + filename - #print("Loading: " + loadPath) - return loadPath - -def group_by_entity(raw): - outputFile = get_base(get_today_filename()) - out = defaultdict(int) - - for ent in raw: - out[ent["entity_group"]] += 1 - myEntityGroup = ent["entity_group"] - print("Found entity group type: " + myEntityGroup) - - if (myEntityGroup in ['Sign_symptom', 'Detailed_description', 'History', 'Activity', 'Medication' ]): - eterm = ent["word"].replace('#','') - minlength = 3 - if len(eterm) > minlength: - print("Found eterm: " + eterm) - eterm.replace("#","") - g1=MatchLOINC(eterm) - g2=MatchLOINCPanelsandForms(eterm) - g3=MatchSNOMED(eterm) - g4=MatchOMS(eterm) - g5=MatchICD10(eterm) - sAll = "" - - print("Saving to output file " + outputFile) - # Create harmonisation output format of input to output code, name, Text - - try: # 18 fields, output to labeled CSV dataset for results teaching on scored regret changes to action plan with data inputs - col = " 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19" - - #LOINC - g11 = g1['LOINC_NUM'].to_string().replace(","," ").replace("\n"," ") - g12 = g1['COMPONENT'].to_string().replace(","," ").replace("\n"," ") - s1 = ("LOINC Terms of entity ," + myEntityGroup + ", with term ," + eterm + ", LOINC codes of ," + g11 + ", and LOINC questions of ," + g12 + ", Label,Value, Label,Value, Label,Value ") - if g11 != 'Series([] )': SaveResult(s1, outputFile) - - #LOINC Panels - g21 = g2['Loinc'].to_string().replace(","," ").replace("\n"," ") - g22 = g2['LoincName'].to_string().replace(","," ").replace("\n"," ") - g23 = g2['ParentLoinc'].to_string().replace(","," ").replace("\n"," ") - g24 = g2['ParentName'].to_string().replace(","," ").replace("\n"," ") - s2 = ("LOINC Panels of entity ," + myEntityGroup + ", with term ," + eterm + ", LOINC codes of ," + g21 + ", and LOINC name of ," + g22 + ", and Parent codes of ," + g23 + ", with Parent names of ," + g24 + ", Label,Value ") - if g21 != 'Series([] )': SaveResult(s2, outputFile) - - #SNOMED - g31 = g3['conceptId'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ") - g32 = g3['term'].to_string().replace(","," ").replace("\n"," ").replace("\l"," ").replace("\r"," ") - s3 = ("SNOMED Terms of entity ," + myEntityGroup + ", with term ," + eterm + ", SNOMED concepts of ," + g31 + ", and SNOMED terms of ," + g32 + ", Label,Value, Label,Value, Label,Value ") - if g31 != 'Series([] )': SaveResult(s3, outputFile) - - #OMS - g41 = g4['Omaha Code'].to_string().replace(","," ").replace("\n"," ") - g42 = g4['SNOMED CT concept ID'].to_string().replace(","," ").replace("\n"," ") - g43 = g4['SNOMED CT'].to_string().replace(","," ").replace("\n"," ") - g44 = g4['PR'].to_string().replace(","," ").replace("\n"," ") - g45 = g4['S&S'].to_string().replace(","," ").replace("\n"," ") - s4 = ("OMS Terms of entity ," + myEntityGroup + ", with term ," + eterm + ", Omaha codes of ," + g41 + ", and SNOMED concepts of ," + g42 + ", and SNOMED codes of ," + g43 + ", and OMS problem of ," + g44 + ", and OMS Sign Symptom of ," + g45) - if g41 != 'Series([] )': SaveResult(s4, outputFile) - - #ICD10 - g51 = g5['Code'].to_string().replace(","," ").replace("\n"," ") - g52 = g5['Description'].to_string().replace(","," ").replace("\n"," ") - s5 = ("ICD10 matches of entity ," + myEntityGroup + ", with term ," + eterm + ", ICD10 codes of ," + g51 + ", and descriptions of ," + g52 + ", Label,Value, Label,Value, Label,Value ") - if g51 != 'Series([] )': SaveResult(s5, outputFile) - - except ValueError as err: - raise ValueError("Error in group by entity \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None - - #print(sAll) - - #return out; - #break; - # out["total"] = sum(out.values()) - # return out - return outputFile - - -def plot_to_figure(grouped): - fig = plt.figure() - plt.bar(x=list(grouped.keys()), height=list(grouped.values())) - plt.margins(0.2) - plt.subplots_adjust(bottom=0.4) - plt.xticks(rotation=90) - return fig - - -def ner(text): - raw = pipe(text) - ner_content = { - "text": text, - "entities": [ - { - "entity": x["entity_group"], - "word": x["word"], - "score": x["score"], - "start": x["start"], - "end": x["end"], - } - for x in raw - ], - } - - #grouped = group_by_entity(raw) - outputFile = group_by_entity(raw) - - #figure = plot_to_figure(grouped) - - label = EXAMPLES.get(text, "Unknown") - - #meta = { -# "entity_counts": grouped, -# "entities": len(set(grouped.keys())), -# "counts": sum(grouped.values()), -# } - - #return (ner_content, meta, label, figure) - outputDataframe = pd.read_csv(outputFile) - #outputFile = outputFile.replace(os.path.dirname(__file__) + "\\","") # Just filename for File download UI output element - - #return (ner_content, meta, label, figure, outputDataframe, outputFile) - return (ner_content, outputDataframe, outputFile) - -# New way = Gradio Blocks: -demo = gr.Blocks() -with demo: - gr.Markdown( - """ - # 🩺⚕️NLP Clinical Ontology Biomedical NER - """ - ) - input = gr.Textbox(label="Note text", value="") - #output=[ - # gr.HighlightedText(label="NER", combine_adjacent=True) - #] - with gr.Tab("Biomedical Entity Recognition"): - output=[ - gr.HighlightedText(label="NER", combine_adjacent=True), - #gr.JSON(label="Entity Counts"), - #gr.Label(label="Rating"), - #gr.Plot(label="Bar"), - gr.Dataframe(label="Dataframe"), - gr.File(label="File"), - ] - examples=list(EXAMPLES.keys()) - gr.Examples(examples, inputs=input) - input.change(fn=ner, inputs=input, outputs=output) - with gr.Tab("Clinical Terminology Resolution"): - #output=[ - # gr.Textbox(placeholder="CT Match Results", lines=10) - #] - with gr.Row(variant="compact"): - btnLOINC = gr.Button("LOINC") - btnPanels = gr.Button("Panels") - btnSNOMED = gr.Button("SNOMED") - btnOMS = gr.Button("OMS") - btnICD10 = gr.Button("ICD10") - - #output=[ - # gr.HighlightedText(label="NER", combine_adjacent=True), - # gr.File(label="File"), # add download link here - # gr.Dataframe(label="Dataframe", headers=["LOINC", "Panels", "SNOMED", "OMS", "ICD10"]), # add harmonised output for input corpus here as a dataframe to UI - # gr.Textbox(placeholder="CT Match Results", lines=10) # add matched text scratchpad here - #] - - - #textCT = gr.Textbox(placeholder="CT Match Results", lines=10) - - #btnLOINC.click(loadFile, inputs=["LOINCTerms.txt"], outputs=output) - #btnPanels.click(loadFile, "LOINCPanelsandForms.txt", output) - #btnSNOMED.click(loadFile, "SNOMEDTerms.txt", output) - #btnOMS.click(loadFile, "OMSTerms.txt", output) - #btnICD10.click(loadFile, "ICD10Terms.txt", output) - - examples=list(EXAMPLES.keys()) - gr.Examples(examples, inputs=input) - input.change(fn=ner, inputs=input, outputs=output) - #with gr.Tab("Examples Page 1"): - # gr.Examples(["a", "b", "c"], inputs=input) - #with gr.Tab("Examples Page 2"): - # gr.Examples(["d", "e", "f"], inputs=input) - #with gr.Tab("Examples Page 2"): - # gr.Examples(["g", "h", "i"], inputs=input) - -demo.launch(debug=True) - -# Old Way - Interface Load -#interface = gr.Interface( -# ner, -# inputs=gr.Textbox(label="Note text", value=""), -# outputs=[ -# gr.HighlightedText(label="NER", combine_adjacent=True), -# gr.JSON(label="Entity Counts"), -# gr.Label(label="Rating"), -# gr.Plot(label="Bar"), -# ], -# examples=list(EXAMPLES.keys()), -# allow_flagging="never", -#) - -#interface.launch() \ No newline at end of file diff --git a/spaces/awacke1/StreamlitCookies/app.py b/spaces/awacke1/StreamlitCookies/app.py deleted file mode 100644 index b3f12f69be8b21e7829aba5174fb8bdb60ac096f..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitCookies/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import streamlit as st - -# Input fields -a = st.number_input("First value", 1, 1000) -b = st.number_input("Second value", 1, 1000) - -# Perform an operation, and save its result -if st.button("Compute value"): - result = a * b - st.experimental_set_query_params(my_saved_result=result) # Save value - -# Retrieve app state -app_state = st.experimental_get_query_params() - -# Display saved result if it exist -if "my_saved_result" in app_state: - saved_result = app_state["my_saved_result"][0] - st.write("Here is your result", saved_result) -else: - st.write("No result to display, compute a value first.") \ No newline at end of file diff --git a/spaces/badayvedat/LLaVA/llava/eval/qa_baseline_gpt35.py b/spaces/badayvedat/LLaVA/llava/eval/qa_baseline_gpt35.py deleted file mode 100644 index babab6e12b4bb8cfa74a7edfa5e56cd1b3e2bf6c..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/eval/qa_baseline_gpt35.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Generate answers with GPT-3.5""" -# Note: you need to be using OpenAI Python v0.27.0 for the code below to work -import argparse -import json -import os -import time -import concurrent.futures - -import openai -import tqdm -import shortuuid - -MODEL = 'gpt-3.5-turbo' -MODEL_ID = 'gpt-3.5-turbo:20230327' - -def get_answer(question_id: int, question: str, max_tokens: int): - ans = { - 'answer_id': shortuuid.uuid(), - 'question_id': question_id, - 'model_id': MODEL_ID, - } - for _ in range(3): - try: - response = openai.ChatCompletion.create( - model=MODEL, - messages=[{ - 'role': 'system', - 'content': 'You are a helpful assistant.' - }, { - 'role': 'user', - 'content': question, - }], - max_tokens=max_tokens, - ) - ans['text'] = response['choices'][0]['message']['content'] - return ans - except Exception as e: - print('[ERROR]', e) - ans['text'] = '#ERROR#' - time.sleep(1) - return ans - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='ChatGPT answer generation.') - parser.add_argument('-q', '--question') - parser.add_argument('-o', '--output') - parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output') - args = parser.parse_args() - - questions_dict = {} - with open(os.path.expanduser(args.question)) as f: - for line in f: - if not line: - continue - q = json.loads(line) - questions_dict[q['question_id']] = q['text'] - - answers = [] - - with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor: - futures = [] - for qid, question in questions_dict.items(): - future = executor.submit(get_answer, qid, question, args.max_tokens) - futures.append(future) - - for future in tqdm.tqdm(concurrent.futures.as_completed(futures), total=len(futures)): - answers.append(future.result()) - - answers.sort(key=lambda x: x['question_id']) - - with open(os.path.expanduser(args.output), 'w') as f: - table = [json.dumps(ans) for ans in answers] - f.write('\n'.join(table)) diff --git a/spaces/banana-projects/datasets-card-creator/build/index.html b/spaces/banana-projects/datasets-card-creator/build/index.html deleted file mode 100644 index 882f6ee5c8f18ed02b5b4ab2eab757401c39dd97..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/datasets-card-creator/build/index.html +++ /dev/null @@ -1 +0,0 @@ -React App
      \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/o3dgc.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/o3dgc.js deleted file mode 100644 index 790507fdab91f1af21a1886f69d7c68e1f374a41..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/libs/o3dgc.js +++ /dev/null @@ -1,2913 +0,0 @@ -/*global ArrayBuffer, Uint32Array, Int32Array, Float32Array, Int8Array, Uint8Array, window, performance, Console*/ - -/* -Copyright (c) 2013 Khaled Mammou - Advanced Micro Devices, Inc. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. -*/ - -var o3dgc = (function () { - "use strict"; - var module, local; - module = {}; - local = {}; - local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0 = 7; - local.O3DGC_BINARY_STREAM_MAX_SYMBOL0 = 127; // ((1 << O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0) >>> 0) - 1; - local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL1 = 6; - local.O3DGC_BINARY_STREAM_MAX_SYMBOL1 = 63; // ((1 << O3DGC_BINARY_STREAM_BITS_PER_SYMBOL1) >>> 0) - 1; - local.O3DGC_BINARY_STREAM_NUM_SYMBOLS_UINT32 = 5; // Math.floor((32 + O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0 - 1) / O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0); - local.O3DGC_BIG_ENDIAN = 0; - local.O3DGC_LITTLE_ENDIAN = 1; - local.O3DGC_MAX_DOUBLE = 1.79769e+308; - local.O3DGC_MIN_LONG = -2147483647; - local.O3DGC_MAX_LONG = 2147483647; - local.O3DGC_MAX_UCHAR8 = 255; - local.O3DGC_MAX_TFAN_SIZE = 256; - local.O3DGC_MAX_ULONG = 4294967295; - local.O3DGC_SC3DMC_START_CODE = 0x00001F1; - local.O3DGC_DV_START_CODE = 0x00001F2; - local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES = 256; - local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES = 256; - local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES = 32; - local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS = 2; - local.O3DGC_SC3DMC_BINARIZATION_FL = 0; // Fixed Length (not supported) - local.O3DGC_SC3DMC_BINARIZATION_BP = 1; // BPC (not supported) - local.O3DGC_SC3DMC_BINARIZATION_FC = 2; // 4 bits Coding (not supported) - local.O3DGC_SC3DMC_BINARIZATION_AC = 3; // Arithmetic Coding (not supported) - local.O3DGC_SC3DMC_BINARIZATION_AC_EGC = 4; // Arithmetic Coding & EGCk - local.O3DGC_SC3DMC_BINARIZATION_ASCII = 5; // Arithmetic Coding & EGCk - local.O3DGC_STREAM_TYPE_UNKOWN = 0; - local.O3DGC_STREAM_TYPE_ASCII = 1; - local.O3DGC_STREAM_TYPE_BINARY = 2; - local.O3DGC_SC3DMC_NO_PREDICTION = 0; // supported - local.O3DGC_SC3DMC_DIFFERENTIAL_PREDICTION = 1; // supported - local.O3DGC_SC3DMC_XOR_PREDICTION = 2; // not supported - local.O3DGC_SC3DMC_ADAPTIVE_DIFFERENTIAL_PREDICTION = 3; // not supported - local.O3DGC_SC3DMC_CIRCULAR_DIFFERENTIAL_PREDICTION = 4; // not supported - local.O3DGC_SC3DMC_PARALLELOGRAM_PREDICTION = 5; // supported - local.O3DGC_SC3DMC_SURF_NORMALS_PREDICTION = 6; // supported - local.O3DGC_SC3DMC_ENCODE_MODE_QBCR = 0; // not supported - local.O3DGC_SC3DMC_ENCODE_MODE_SVA = 1; // not supported - local.O3DGC_SC3DMC_ENCODE_MODE_TFAN = 2; // supported - local.O3DGC_DYNAMIC_VECTOR_ENCODE_MODE_LIFT = 0; - local.O3DGC_MIN_NEIGHBORS_SIZE = 128; - local.O3DGC_MIN_NUM_NEIGHBORS_SIZE = 16; - local.O3DGC_TFANS_MIN_SIZE_ALLOCATED_VERTICES_BUFFER = 128; - local.O3DGC_TFANS_MIN_SIZE_TFAN_SIZE_BUFFER = 8; - local.O3DGC_DEFAULT_VECTOR_SIZE = 32; - - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_UNKOWN = 0; - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_POSITION = 1; - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_NORMAL = 2; - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_COLOR = 3; - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_TEXCOORD = 4; - module.O3DGC_IFS_FLOAT_ATTRIBUTE_TYPE_WEIGHT = 5; - module.O3DGC_IFS_INT_ATTRIBUTE_TYPE_UNKOWN = 0; - module.O3DGC_IFS_INT_ATTRIBUTE_TYPE_INDEX = 1; - module.O3DGC_IFS_INT_ATTRIBUTE_TYPE_JOINT_ID = 2; - module.O3DGC_IFS_INT_ATTRIBUTE_TYPE_INDEX_BUFFER_ID = 3; - - module.O3DGC_OK = 0; - module.O3DGC_ERROR_BUFFER_FULL = 1; - module.O3DGC_ERROR_CORRUPTED_STREAM = 5; - module.O3DGC_ERROR_NON_SUPPORTED_FEATURE = 6; - module.O3DGC_ERROR_AC = 7; - - function SystemEndianness() { - var a, b, c; - b = new ArrayBuffer(4); - a = new Uint32Array(b); - c = new Uint8Array(b); - a[0] = 1; - if (c[0] === 1) { - return local.O3DGC_LITTLE_ENDIAN; - } - return local.O3DGC_BIG_ENDIAN; - } - // SC3DMCStats class - module.SC3DMCStats = function () { - this.m_timeCoord = 0; - this.m_timeNormal = 0; - this.m_timeCoordIndex = 0; - this.m_timeFloatAttribute = new Float32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_timeIntAttribute = new Float32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - this.m_timeReorder = 0; - this.m_streamSizeCoord = 0; - this.m_streamSizeNormal = 0; - this.m_streamSizeCoordIndex = 0; - this.m_streamSizeFloatAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_streamSizeIntAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - }; - // SC3DMCTriplet class - module.SC3DMCTriplet = function (a, b, c) { - this.m_a = a; - this.m_b = b; - this.m_c = c; - }; - module.SC3DMCTriplet.prototype.Less = function (rhs) { - var res; - if (this.m_c !== rhs.m_c) { - res = (this.m_c < rhs.m_c); - } else if (this.m_b !== rhs.m_b) { - res = (this.m_b < rhs.m_b); - } else { - res = (this.m_a < rhs.m_a); - } - return res; - }; - module.SC3DMCTriplet.prototype.Equal = function (rhs) { - return (this.m_c === rhs.m_c && this.m_b === rhs.m_b && this.m_a === rhs.m_a); - }; - // SC3DMCPredictor class - module.SC3DMCPredictor = function () { - this.m_id = new module.SC3DMCTriplet(-1, -1, -1); - this.m_pred = new Float32Array(local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES); - }; - // fix me: optimize this function (e.g., binary search) - function InsertPredictor(e, nPred, list, dimFloatArray) { - var pos, foundOrInserted, j, j1, j0, h, i; - pos = -1; - foundOrInserted = false; - j1 = nPred.m_value; - j0 = 0; - for (j = j0; j < j1; ++j) { - if (e.Equal(list[j].m_id)) { - foundOrInserted = true; - break; - } else if (e.Less(list[j].m_id)) { - if (nPred.m_value < local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS) { - ++nPred.m_value; - } - for (h = nPred.m_value - 1; h > j; --h) { - list[h].m_id.m_a = list[h - 1].m_id.m_a; - list[h].m_id.m_b = list[h - 1].m_id.m_b; - list[h].m_id.m_c = list[h - 1].m_id.m_c; - for (i = 0; i < dimFloatArray; ++i) { - list[h].m_pred[i] = list[h - 1].m_pred[i]; - } - } - list[j].m_id.m_a = e.m_a; - list[j].m_id.m_b = e.m_b; - list[j].m_id.m_c = e.m_c; - pos = j; - foundOrInserted = true; - break; - } - } - if (!foundOrInserted && nPred.m_value < local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS) { - pos = nPred.m_value++; - list[pos].m_id.m_a = e.m_a; - list[pos].m_id.m_b = e.m_b; - list[pos].m_id.m_c = e.m_c; - } - return pos; - } - // Timer class - if (typeof window.performance === 'undefined') { - window.performance = {}; - } - if (!window.performance.now) { - local.nowOffset = Date.now(); - if (performance.timing && performance.timing.navigationStart) { - local.nowOffset = performance.timing.navigationStart; - } - window.performance.now = function now() { - return Date.now() - local.nowOffset; - }; - } - module.Timer = function () { - this.m_start = 0; - this.m_end = 0; - }; - module.Timer.prototype.Tic = function () { - this.m_start = window.performance.now(); - }; - module.Timer.prototype.Toc = function () { - this.m_end = window.performance.now(); - }; - module.Timer.prototype.GetElapsedTime = function () { - return this.m_end - this.m_start; - }; - // Vec3 class - module.Vec3 = function (x, y, z) { - this.m_x = x; - this.m_y = y; - this.m_z = z; - }; - module.Vec3.prototype.Set = function (x, y, z) { - this.m_x = x; - this.m_y = y; - this.m_z = z; - }; - module.Vec3.prototype.Sub = function (lhs, rhs) { - this.m_x = lhs.m_x - rhs.m_x; - this.m_y = lhs.m_y - rhs.m_y; - this.m_z = lhs.m_z - rhs.m_z; - }; - module.Vec3.prototype.Add = function (lhs, rhs) { - this.m_x = lhs.m_x + rhs.m_x; - this.m_y = lhs.m_y + rhs.m_y; - this.m_z = lhs.m_z + rhs.m_z; - }; - module.Vec3.prototype.SelfAdd = function (v) { - this.m_x += v.m_x; - this.m_y += v.m_y; - this.m_z += v.m_z; - }; - module.Vec3.prototype.Cross = function (lhs, rhs) { - this.m_x = lhs.m_y * rhs.m_z - lhs.m_z * rhs.m_y; - this.m_y = lhs.m_z * rhs.m_x - lhs.m_x * rhs.m_z; - this.m_z = lhs.m_x * rhs.m_y - lhs.m_y * rhs.m_x; - }; - module.Vec3.prototype.GetNorm = function () { - return Math.sqrt(this.m_x * this.m_x + this.m_y * this.m_y + this.m_z * this.m_z); - }; - function SphereToCube(vin, vout) { - var ax, ay, az; - ax = Math.abs(vin.m_x); - ay = Math.abs(vin.m_y); - az = Math.abs(vin.m_z); - if (az >= ax && az >= ay) { - if (vin.m_z >= 0) { - vout.m_z = 0; - vout.m_x = vin.m_x; - vout.m_y = vin.m_y; - } else { - vout.m_z = 1; - vout.m_x = -vin.m_x; - vout.m_y = -vin.m_y; - } - } else if (ay >= ax && ay >= az) { - if (vin.m_y >= 0) { - vout.m_z = 2; - vout.m_x = vin.m_z; - vout.m_y = vin.m_x; - } else { - vout.m_z = 3; - vout.m_x = -vin.m_z; - vout.m_y = -vin.m_x; - } - } else { - if (vin.m_x >= 0) { - vout.m_z = 4; - vout.m_x = vin.m_y; - vout.m_y = vin.m_z; - } else { - vout.m_z = 5; - vout.m_x = -vin.m_y; - vout.m_y = -vin.m_z; - } - } - } - local.CubeToSphere = { - 0: function (vin, vout) { - vout.m_x = vin.m_x; - vout.m_y = vin.m_y; - vout.m_z = Math.sqrt(Math.max(0.0, 1.0 - vout.m_x * vout.m_x - vout.m_y * vout.m_y)); - }, - 1: function (vin, vout) { - vout.m_x = -vin.m_x; - vout.m_y = -vin.m_y; - vout.m_z = -Math.sqrt(Math.max(0.0, 1.0 - vout.m_x * vout.m_x - vout.m_y * vout.m_y)); - }, - 2: function (vin, vout) { - vout.m_z = vin.m_x; - vout.m_x = vin.m_y; - vout.m_y = Math.sqrt(Math.max(0.0, 1.0 - vout.m_x * vout.m_x - vout.m_z * vout.m_z)); - }, - 3: function (vin, vout) { - vout.m_z = -vin.m_x; - vout.m_x = -vin.m_y; - vout.m_y = -Math.sqrt(Math.max(0.0, 1.0 - vout.m_x * vout.m_x - vout.m_z * vout.m_z)); - }, - 4: function (vin, vout) { - vout.m_y = vin.m_x; - vout.m_z = vin.m_y; - vout.m_x = Math.sqrt(Math.max(0.0, 1.0 - vout.m_y * vout.m_y - vout.m_z * vout.m_z)); - }, - 5: function (vin, vout) { - vout.m_y = -vin.m_x; - vout.m_z = -vin.m_y; - vout.m_x = -Math.sqrt(Math.max(0.0, 1.0 - vout.m_y * vout.m_y - vout.m_z * vout.m_z)); - } - }; - function IntToUInt(value) { - return (value < 0) ? (-1 - (2 * value)) : (2 * value); - } - function UIntToInt(uiValue) { - return (uiValue & 1) ? -((uiValue + 1) >>> 1) : ((uiValue >>> 1)); - } - module.Iterator = function () { - this.m_count = 0; - }; - module.NumberRef = function () { - this.m_value = 0; - }; - // BinaryStream class - module.BinaryStream = function (buffer) { - this.m_endianness = SystemEndianness(); - this.m_buffer = buffer; - this.m_stream = new Uint8Array(this.m_buffer); - this.m_localBuffer = new ArrayBuffer(4); - this.m_localBufferViewUChar8 = new Uint8Array(this.m_localBuffer); - this.m_localBufferViewFloat32 = new Float32Array(this.m_localBuffer); - this.m_localBufferViewUInt32 = new Uint32Array(this.m_localBuffer); - }; - module.BinaryStream.prototype.ReadFloat32Bin = function (bsIterator) { - if (this.m_endianness === local.O3DGC_BIG_ENDIAN) { - this.m_localBufferViewUChar8[3] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[2] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[1] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[0] = this.m_stream[bsIterator.m_count++]; - } else { - this.m_localBufferViewUChar8[0] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[1] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[2] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[3] = this.m_stream[bsIterator.m_count++]; - } - return this.m_localBufferViewFloat32[0]; - }; - module.BinaryStream.prototype.ReadUInt32Bin = function (bsIterator) { - if (this.m_endianness === local.O3DGC_BIG_ENDIAN) { - this.m_localBufferViewUChar8[3] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[2] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[1] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[0] = this.m_stream[bsIterator.m_count++]; - } else { - this.m_localBufferViewUChar8[0] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[1] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[2] = this.m_stream[bsIterator.m_count++]; - this.m_localBufferViewUChar8[3] = this.m_stream[bsIterator.m_count++]; - } - return this.m_localBufferViewUInt32[0]; - }; - module.BinaryStream.prototype.ReadUChar8Bin = function (bsIterator) { - return this.m_stream[bsIterator.m_count++]; - }; - module.BinaryStream.prototype.ReadUInt32ASCII = function (bsIterator) { - var value, shift, i; - value = 0; - shift = 0; - for (i = 0; i < local.O3DGC_BINARY_STREAM_NUM_SYMBOLS_UINT32; ++i) { - value += (this.m_stream[bsIterator.m_count++] << shift) >>> 0; - shift += local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0; - } - return value; - }; - module.BinaryStream.prototype.ReadFloat32ASCII = function (bsIterator) { - var value = this.ReadUInt32ASCII(bsIterator); - if (this.m_endianness === local.O3DGC_BIG_ENDIAN) { - this.m_localBufferViewUChar8[3] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[2] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[1] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[0] = value & local.O3DGC_MAX_UCHAR8; - } else { - this.m_localBufferViewUChar8[0] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[1] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[2] = value & local.O3DGC_MAX_UCHAR8; - value >>>= 8; - this.m_localBufferViewUChar8[3] = value & local.O3DGC_MAX_UCHAR8; - } - return this.m_localBufferViewFloat32[0]; - }; - module.BinaryStream.prototype.ReadIntASCII = function (bsIterator) { - return UIntToInt(this.ReadUIntASCII(bsIterator)); - }; - module.BinaryStream.prototype.ReadUIntASCII = function (bsIterator) { - var i, x, value; - value = this.m_stream[bsIterator.m_count++]; - if (value === local.O3DGC_BINARY_STREAM_MAX_SYMBOL0) { - i = 0; - do { - x = this.m_stream[bsIterator.m_count++]; - value += ((x >>> 1) << i) >>> 0; - i += local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL1; - } while (x & 1); - } - return value; - }; - module.BinaryStream.prototype.ReadUCharASCII = function (bsIterator) { - return this.m_stream[bsIterator.m_count++]; - }; - module.BinaryStream.prototype.ReadFloat32 = function (bsIterator, streamType) { - if (streamType === local.O3DGC_STREAM_TYPE_ASCII) { - return this.ReadFloat32ASCII(bsIterator); - } - return this.ReadFloat32Bin(bsIterator); - }; - module.BinaryStream.prototype.ReadUInt32 = function (bsIterator, streamType) { - if (streamType === local.O3DGC_STREAM_TYPE_ASCII) { - return this.ReadUInt32ASCII(bsIterator); - } - return this.ReadUInt32Bin(bsIterator); - }; - module.BinaryStream.prototype.ReadUChar = function (bsIterator, streamType) { - if (streamType === local.O3DGC_STREAM_TYPE_ASCII) { - return this.ReadUCharASCII(bsIterator); - } - return this.ReadUChar8Bin(bsIterator); - }; - module.BinaryStream.prototype.GetBuffer = function (bsIterator, size) { - return new Uint8Array(this.m_buffer, bsIterator.m_count, size); - }; - - // Copyright (c) 2004 Amir Said (said@ieee.org) & William A. Pearlman (pearlw@ecse.rpi.edu) - // All rights reserved. - - local.O3DGC_AC_MIN_LENGTH = 0x01000000; // threshold for renormalization - local.O3DGC_AC_MAX_LENGTH = 0xFFFFFFFF; // maximum AC interval length - local.O3DGC_AC_BM_LENGTH_SHIFT = 13; // Maximum values for binary models length bits discarded before mult. - local.O3DGC_AC_BM_MAX_COUNT = (1 << local.O3DGC_AC_BM_LENGTH_SHIFT) >>> 0; // for adaptive models - local.O3DGC_AC_DM_LENGTH_SHIFT = 15; // Maximum values for general models length bits discarded before mult. - local.O3DGC_AC_DM_MAX_COUNT = (1 << local.O3DGC_AC_DM_LENGTH_SHIFT) >>> 0; // for adaptive models - // StaticBitModel class - module.StaticBitModel = function () { - this.m_bit0Prob = (1 << (local.O3DGC_AC_BM_LENGTH_SHIFT - 1)) >>> 0; // p0 = 0.5 - }; - module.StaticBitModel.prototype.SetProbability = function (p) { - this.m_bit0Prob = Math.floor(p * ((1 << local.O3DGC_AC_BM_LENGTH_SHIFT) >>> 0)); - }; - // AdaptiveBitModel class - module.AdaptiveBitModel = function () { - // initialization to equiprobable model - this.m_updateCycle = 4; - this.m_bitsUntilUpdate = 4; - this.m_bit0Prob = (1 << (local.O3DGC_AC_BM_LENGTH_SHIFT - 1)) >>> 0; - this.m_bit0Count = 1; - this.m_bitCount = 2; - }; - module.AdaptiveBitModel.prototype.Reset = function () { - this.m_updateCycle = 4; - this.m_bitsUntilUpdate = 4; - this.m_bit0Prob = (1 << (local.O3DGC_AC_BM_LENGTH_SHIFT - 1)) >>> 0; - this.m_bit0Count = 1; - this.m_bitCount = 2; - }; - module.AdaptiveBitModel.prototype.Update = function () { - // halve counts when a threshold is reached - if ((this.m_bitCount += this.m_updateCycle) > local.O3DGC_AC_BM_MAX_COUNT) { - this.m_bitCount = (this.m_bitCount + 1) >>> 1; - this.m_bit0Count = (this.m_bit0Count + 1) >>> 1; - if (this.m_bit0Count === this.m_bitCount) { - ++this.m_bitCount; - } - } - // compute scaled bit 0 probability - var scale = Math.floor(0x80000000 / this.m_bitCount); - this.m_bit0Prob = (this.m_bit0Count * scale) >>> (31 - local.O3DGC_AC_BM_LENGTH_SHIFT); - // set frequency of model updates - this.m_updateCycle = (5 * this.m_updateCycle) >>> 2; - if (this.m_updateCycle > 64) { - this.m_updateCycle = 64; - } - this.m_bitsUntilUpdate = this.m_updateCycle; - }; - // AdaptiveDataModel class - module.AdaptiveDataModel = function () { - this.m_buffer = {}; - this.m_distribution = {}; - this.m_symbolCount = {}; - this.m_decoderTable = {}; - this.m_totalCount = 0; - this.m_updateCycle = 0; - this.m_symbolsUntilUpdate = 0; - this.m_dataSymbols = 0; - this.m_lastSymbol = 0; - this.m_tableSize = 0; - this.m_tableShift = 0; - }; - module.AdaptiveDataModel.prototype.Update = function () { - var n, sum, s, scale, k, max_cycle, w; - // halve counts when a threshold is reached - if ((this.m_totalCount += this.m_updateCycle) > local.O3DGC_AC_DM_MAX_COUNT) { - this.m_totalCount = 0; - for (n = 0; n < this.m_dataSymbols; ++n) { - this.m_totalCount += (this.m_symbolCount[n] = (this.m_symbolCount[n] + 1) >>> 1); - } - } - // compute cumulative distribution, decoder table - sum = 0; - s = 0; - scale = Math.floor(0x80000000 / this.m_totalCount); - if (this.m_tableSize === 0) { - for (k = 0; k < this.m_dataSymbols; ++k) { - this.m_distribution[k] = (scale * sum) >>> (31 - local.O3DGC_AC_DM_LENGTH_SHIFT); - sum += this.m_symbolCount[k]; - } - } else { - for (k = 0; k < this.m_dataSymbols; ++k) { - this.m_distribution[k] = (scale * sum) >>> (31 - local.O3DGC_AC_DM_LENGTH_SHIFT); - sum += this.m_symbolCount[k]; - w = this.m_distribution[k] >>> this.m_tableShift; - while (s < w) { - this.m_decoderTable[++s] = k - 1; - } - } - this.m_decoderTable[0] = 0; - while (s <= this.m_tableSize) { - this.m_decoderTable[++s] = this.m_dataSymbols - 1; - } - } - // set frequency of model updates - this.m_updateCycle = (5 * this.m_updateCycle) >>> 2; - max_cycle = ((this.m_dataSymbols + 6) << 3) >>> 0; - if (this.m_updateCycle > max_cycle) { - this.m_updateCycle = max_cycle; - } - this.m_symbolsUntilUpdate = this.m_updateCycle; - }; - module.AdaptiveDataModel.prototype.Reset = function () { - var k; - if (this.m_dataSymbols === 0) { - return; - } - // restore probability estimates to uniform distribution - this.m_totalCount = 0; - this.m_updateCycle = this.m_dataSymbols; - for (k = 0; k < this.m_dataSymbols; ++k) { - this.m_symbolCount[k] = 1; - } - this.Update(); - this.m_symbolsUntilUpdate = this.m_updateCycle = (this.m_dataSymbols + 6) >>> 1; - }; - module.AdaptiveDataModel.prototype.SetAlphabet = function (number_of_symbols) { - if ((number_of_symbols < 2) || (number_of_symbols > (1 << 11))) { - Console.log("invalid number of data symbols"); - return module.O3DGC_ERROR_AC; - } - if (this.m_dataSymbols !== number_of_symbols) { // assign memory for data model - this.m_dataSymbols = number_of_symbols; - this.m_lastSymbol = this.m_dataSymbols - 1; - // define size of table for fast decoding - if (this.m_dataSymbols > 16) { - var table_bits = 3; - while (this.m_dataSymbols > ((1 << (table_bits + 2)) >>> 0)) { - ++table_bits; - } - this.m_tableSize = (1 << table_bits) >>> 0; - this.m_tableShift = local.O3DGC_AC_DM_LENGTH_SHIFT - table_bits; - this.m_buffer = new ArrayBuffer(4 * (2 * this.m_dataSymbols + this.m_tableSize + 2)); - this.m_distribution = new Uint32Array(this.m_buffer, 0, this.m_dataSymbols); - this.m_symbolCount = new Uint32Array(this.m_buffer, 4 * this.m_dataSymbols, this.m_dataSymbols); - this.m_decoderTable = new Uint32Array(this.m_buffer, 8 * this.m_dataSymbols, this.m_tableSize + 2); - } else {// small alphabet: no table needed - this.m_tableSize = this.m_tableShift = 0; - this.m_buffer = new ArrayBuffer(4 * 2 * this.m_dataSymbols); - this.m_distribution = new Uint32Array(this.m_buffer, 0, this.m_dataSymbols); - this.m_symbolCount = new Uint32Array(this.m_buffer, 4 * this.m_dataSymbols, this.m_dataSymbols); - this.m_decoderTable = {}; - } - } - this.Reset(); // initialize model - return module.O3DGC_OK; - }; - // ArithmeticDecoder class - module.ArithmeticDecoder = function () { - this.m_codeBuffer = {}; - this.m_acShift = 0; - this.m_base = 0; - this.m_value = 0; - this.m_length = 0; // arithmetic coding state - this.m_bufferSize = 0; - this.m_mode = 0; // mode: 0 = undef, 1 = encoder, 2 = decoder - }; - module.ArithmeticDecoder.prototype.SetBuffer = function (max_code_bytes, user_buffer) { - if (max_code_bytes === 0) { - Console.log("invalid codec buffer size"); - return module.O3DGC_ERROR_AC; - } - if (this.m_mode !== 0) { - Console.log("cannot set buffer while encoding or decoding"); - return module.O3DGC_ERROR_AC; - } - this.m_bufferSize = max_code_bytes; - this.m_codeBuffer = user_buffer; - }; - module.ArithmeticDecoder.prototype.StartDecoder = function () { - if (this.m_mode !== 0) { - Console.log("cannot start decoder"); - return module.O3DGC_ERROR_AC; - } - if (this.m_bufferSize === 0) { - Console.log("no code buffer set"); - return module.O3DGC_ERROR_AC; - } - // initialize decoder: interval, pointer, initial code value - this.m_mode = 2; - this.m_length = local.O3DGC_AC_MAX_LENGTH; - this.m_acShift = 3; - this.m_value = ((this.m_codeBuffer[0] << 24) | (this.m_codeBuffer[1] << 16) | (this.m_codeBuffer[2] << 8) | (this.m_codeBuffer[3])) >>> 0; - }; - module.ArithmeticDecoder.prototype.StopDecoder = function () { - if (this.m_mode !== 2) { - Console.log("invalid to stop decoder"); - return module.O3DGC_ERROR_AC; - } - this.m_mode = 0; - }; - module.ArithmeticDecoder.prototype.GetBit = function () { - this.m_length >>>= 1; // halve interval - var bit = (this.m_value >= this.m_length); // decode bit - if (bit) { - this.m_value -= this.m_length; // move base - } - if (this.m_length < local.O3DGC_AC_MIN_LENGTH) { - this.RenormDecInterval(); // renormalization - } - return bit; - }; - module.ArithmeticDecoder.prototype.GetBits = function (bits) { - var s = Math.floor(this.m_value / (this.m_length >>>= bits)); // decode symbol, change length - this.m_value -= this.m_length * s; // update interval - if (this.m_length < local.O3DGC_AC_MIN_LENGTH) { - this.RenormDecInterval(); // renormalization - } - return s; - }; - module.ArithmeticDecoder.prototype.DecodeStaticBitModel = function (M) { - var x, bit; - x = M.m_bit0Prob * (this.m_length >>> local.O3DGC_AC_BM_LENGTH_SHIFT); // product l x p0 - bit = (this.m_value >= x); // decision - // update & shift interval - if (!bit) { - this.m_length = x; - } else { - this.m_value -= x; // shifted interval base = 0 - this.m_length -= x; - } - if (this.m_length < local.O3DGC_AC_MIN_LENGTH) { - this.RenormDecInterval(); // renormalization - } - return bit; // return data bit value - }; - module.ArithmeticDecoder.prototype.DecodeAdaptiveBitModel = function (M) { - var x, bit; - x = M.m_bit0Prob * (this.m_length >>> local.O3DGC_AC_BM_LENGTH_SHIFT); // product l x p0 - bit = (this.m_value >= x); // decision - // update interval - if (!bit) { - this.m_length = x; - ++M.m_bit0Count; - } else { - this.m_value -= x; - this.m_length -= x; - } - if (this.m_length < local.O3DGC_AC_MIN_LENGTH) { - this.RenormDecInterval(); // renormalization - } - if (--M.m_bitsUntilUpdate === 0) { - M.Update(); // periodic model update - } - return bit; // return data bit value - }; - module.ArithmeticDecoder.prototype.DecodeAdaptiveDataModel = function (M) { - var n, s, x, y, t, dv, z, m; - y = this.m_length; - if (M.m_tableSize > 0) { // use table look-up for faster decoding - dv = Math.floor(this.m_value / (this.m_length >>>= local.O3DGC_AC_DM_LENGTH_SHIFT)); - t = dv >>> M.m_tableShift; - s = M.m_decoderTable[t]; // initial decision based on table look-up - n = M.m_decoderTable[t + 1] + 1; - while (n > s + 1) { // finish with bisection search - m = (s + n) >>> 1; - if (M.m_distribution[m] > dv) { - n = m; - } else { - s = m; - } - } - // compute products - x = M.m_distribution[s] * this.m_length; - if (s !== M.m_lastSymbol) { - y = M.m_distribution[s + 1] * this.m_length; - } - } else { // decode using only multiplications - x = s = 0; - this.m_length >>>= local.O3DGC_AC_DM_LENGTH_SHIFT; - m = (n = M.m_dataSymbols) >>> 1; - // decode via bisection search - do { - z = this.m_length * M.m_distribution[m]; - if (z > this.m_value) { - n = m; - y = z; // value is smaller - } else { - s = m; - x = z; // value is larger or equal - } - } while ((m = (s + n) >>> 1) !== s); - } - this.m_value -= x; // update interval - this.m_length = y - x; - if (this.m_length < local.O3DGC_AC_MIN_LENGTH) { - this.RenormDecInterval(); // renormalization - } - ++M.m_symbolCount[s]; - if (--M.m_symbolsUntilUpdate === 0) { - M.Update(false); // periodic model update - } - return s; - }; - module.ArithmeticDecoder.prototype.ExpGolombDecode = function (k, bModel0, bModel1) { - var symbol, binary_symbol, l; - symbol = 0; - binary_symbol = 0; - do { - l = this.DecodeAdaptiveBitModel(bModel1); - if (l) { - symbol += (1 << k) >>> 0; - k++; - } - } while (l); - while (k--) { //next binary part - if (this.DecodeStaticBitModel(bModel0)) { - binary_symbol = (binary_symbol | (1 << k)) >>> 0; - } - } - return (symbol + binary_symbol); - }; - module.ArithmeticDecoder.prototype.RenormDecInterval = function () { - do { // read least-significant byte - this.m_value = ((this.m_value << 8) | this.m_codeBuffer[++this.m_acShift]) >>> 0; - this.m_length = (this.m_length << 8) >>> 0; - } while (this.m_length < local.O3DGC_AC_MIN_LENGTH); // length multiplied by 256 - }; - module.ArithmeticDecoder.prototype.DecodeIntACEGC = function (mModelValues, bModel0, bModel1, exp_k, M) { - var uiValue = this.DecodeAdaptiveDataModel(mModelValues); - if (uiValue === M) { - uiValue += this.ExpGolombDecode(exp_k, bModel0, bModel1); - } - return UIntToInt(uiValue); - }; - module.ArithmeticDecoder.prototype.DecodeUIntACEGC = function (mModelValues, bModel0, bModel1, exp_k, M) { - var uiValue = this.DecodeAdaptiveDataModel(mModelValues); - if (uiValue === M) { - uiValue += this.ExpGolombDecode(exp_k, bModel0, bModel1); - } - return uiValue; - }; - - // - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - // FIFO class - module.FIFO = function () { - this.m_data = {}; - this.m_allocated = 0; - this.m_size = 0; - this.m_start = 0; - this.m_end = 0; - }; - module.FIFO.prototype.Clear = function () { - this.m_start = this.m_end = this.m_size = 0; - }; - module.FIFO.prototype.GetAllocatedSize = function () { - return this.m_allocated; - }; - module.FIFO.prototype.GetSize = function () { - return this.m_size; - }; - module.FIFO.prototype.Allocate = function (size) { - if (size > this.m_allocated) { - this.m_allocated = size; - this.m_data = new Int32Array(this.m_allocated); - } - this.Clear(); - return module.O3DGC_OK; - }; - module.FIFO.prototype.PopFirst = function () { - --this.m_size; - var current = this.m_start++; - if (this.m_start === this.m_allocated) { - this.m_end = 0; - } - return this.m_data[current]; - }; - module.FIFO.prototype.PushBack = function (value) { - --this.m_size; - this.m_data[this.m_end] = value; - ++this.m_size; - ++this.m_end; - if (this.m_end === this.m_allocated) { - this.m_end = 0; - } - }; - // IndexedFaceSet class - module.IndexedFaceSet = function () { - this.m_nCoordIndex = 0; - this.m_nCoord = 0; - this.m_nNormal = 0; - this.m_numFloatAttributes = 0; - this.m_numIntAttributes = 0; - this.m_creaseAngle = 30.0; - this.m_ccw = true; - this.m_solid = true; - this.m_convex = true; - this.m_isTriangularMesh = true; - this.m_coordMin = new Float32Array(3); - this.m_coordMax = new Float32Array(3); - this.m_normalMin = new Float32Array(3); - this.m_normalMax = new Float32Array(3); - this.m_nFloatAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_nIntAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - this.m_dimFloatAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_dimIntAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - this.m_typeFloatAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_typeIntAttribute = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - this.m_minFloatAttributeBuffer = new ArrayBuffer(4 * local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES); - this.m_minFloatAttribute = new Float32Array(this.m_minFloatAttributeBuffer); - this.m_maxFloatAttributeBuffer = new ArrayBuffer(4 * local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES); - this.m_maxFloatAttribute = new Float32Array(this.m_maxFloatAttributeBuffer); - this.m_coordIndex = {}; - this.m_coord = {}; - this.m_normal = {}; - this.m_floatAttribute = []; - this.m_intAttribute = []; - }; - module.IndexedFaceSet.prototype.GetNCoordIndex = function () { - return this.m_nCoordIndex; - }; - module.IndexedFaceSet.prototype.GetNCoordIndex = function () { - return this.m_nCoordIndex; - }; - module.IndexedFaceSet.prototype.GetNCoord = function () { - return this.m_nCoord; - }; - module.IndexedFaceSet.prototype.GetNNormal = function () { - return this.m_nNormal; - }; - module.IndexedFaceSet.prototype.GetNFloatAttribute = function (a) { - return this.m_nFloatAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetNIntAttribute = function (a) { - return this.m_nIntAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetNumFloatAttributes = function () { - return this.m_numFloatAttributes; - }; - module.IndexedFaceSet.prototype.GetNumIntAttributes = function () { - return this.m_numIntAttributes; - }; - module.IndexedFaceSet.prototype.GetCoordMinArray = function () { - return this.m_coordMin; - }; - module.IndexedFaceSet.prototype.GetCoordMaxArray = function () { - return this.m_coordMax; - }; - module.IndexedFaceSet.prototype.GetNormalMinArray = function () { - return this.m_normalMin; - }; - module.IndexedFaceSet.prototype.GetNormalMaxArray = function () { - return this.m_normalMax; - }; - module.IndexedFaceSet.prototype.GetFloatAttributeMinArray = function (a) { - return (new Float32Array(this.m_minFloatAttributeBuffer, a * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES * 4, this.GetFloatAttributeDim(a))); - }; - module.IndexedFaceSet.prototype.GetFloatAttributeMaxArray = function (a) { - return (new Float32Array(this.m_maxFloatAttributeBuffer, a * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES * 4, this.GetFloatAttributeDim(a))); - }; - module.IndexedFaceSet.prototype.GetFloatAttributeDim = function (a) { - return this.m_dimFloatAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetIntAttributeDim = function (a) { - return this.m_dimIntAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetFloatAttributeType = function (a) { - return this.m_typeFloatAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetIntAttributeType = function (a) { - return this.m_typeIntAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetFloatAttributeMax = function (a, dim) { - return this.m_maxFloatAttribute[a * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES + dim]; - }; - module.IndexedFaceSet.prototype.GetCreaseAngle = function () { - return this.m_creaseAngle; - }; - module.IndexedFaceSet.prototype.GetCreaseAngle = function () { - return this.m_creaseAngle; - }; - module.IndexedFaceSet.prototype.GetCCW = function () { - return this.m_ccw; - }; - module.IndexedFaceSet.prototype.GetSolid = function () { - return this.m_solid; - }; - module.IndexedFaceSet.prototype.GetConvex = function () { - return this.m_convex; - }; - module.IndexedFaceSet.prototype.GetIsTriangularMesh = function () { - return this.m_isTriangularMesh; - }; - module.IndexedFaceSet.prototype.GetCoordIndex = function () { - return this.m_coordIndex; - }; - module.IndexedFaceSet.prototype.GetCoordIndex = function () { - return this.m_coordIndex; - }; - module.IndexedFaceSet.prototype.GetCoord = function () { - return this.m_coord; - }; - module.IndexedFaceSet.prototype.GetNormal = function () { - return this.m_normal; - }; - module.IndexedFaceSet.prototype.GetFloatAttribute = function (a) { - return this.m_floatAttribute[a]; - }; - module.IndexedFaceSet.prototype.GetIntAttribute = function (a) { - return this.m_intAttribute[a]; - }; - module.IndexedFaceSet.prototype.SetNCoordIndex = function (nCoordIndex) { - this.m_nCoordIndex = nCoordIndex; - }; - module.IndexedFaceSet.prototype.SetNNormalIndex = function (nNormalIndex) { - }; - module.IndexedFaceSet.prototype.SetNormalPerVertex = function (perVertex) { - }; - module.IndexedFaceSet.prototype.SetNFloatAttributeIndex = function (nFloatAttributeIndex) { - }; - module.IndexedFaceSet.prototype.SetNIntAttributeIndex = function (nIntAttributeIndex) { - }; - module.IndexedFaceSet.prototype.SetFloatAttributePerVertex = function (perVertex) { - }; - module.IndexedFaceSet.prototype.SetIntAttributePerVertex = function (perVertex) { - }; - module.IndexedFaceSet.prototype.SetNCoord = function (nCoord) { - this.m_nCoord = nCoord; - }; - module.IndexedFaceSet.prototype.SetNNormal = function (nNormal) { - this.m_nNormal = nNormal; - }; - module.IndexedFaceSet.prototype.SetNumFloatAttributes = function (numFloatAttributes) { - this.m_numFloatAttributes = numFloatAttributes; - }; - module.IndexedFaceSet.prototype.SetNumIntAttributes = function (numIntAttributes) { - this.m_numIntAttributes = numIntAttributes; - }; - module.IndexedFaceSet.prototype.SetCreaseAngle = function (creaseAngle) { - this.m_creaseAngle = creaseAngle; - }; - module.IndexedFaceSet.prototype.SetCCW = function (ccw) { - this.m_ccw = ccw; - }; - module.IndexedFaceSet.prototype.SetSolid = function (solid) { - this.m_solid = solid; - }; - module.IndexedFaceSet.prototype.SetConvex = function (convex) { - this.m_convex = convex; - }; - module.IndexedFaceSet.prototype.SetIsTriangularMesh = function (isTriangularMesh) { - this.m_isTriangularMesh = isTriangularMesh; - }; - module.IndexedFaceSet.prototype.SetCoordMin = function (j, min) { - this.m_coordMin[j] = min; - }; - module.IndexedFaceSet.prototype.SetCoordMax = function (j, max) { - this.m_coordMax[j] = max; - }; - module.IndexedFaceSet.prototype.SetNormalMin = function (j, min) { - this.m_normalMin[j] = min; - }; - module.IndexedFaceSet.prototype.SetNormalMax = function (j, max) { - this.m_normalMax[j] = max; - }; - module.IndexedFaceSet.prototype.SetNFloatAttribute = function (a, nFloatAttribute) { - this.m_nFloatAttribute[a] = nFloatAttribute; - }; - module.IndexedFaceSet.prototype.SetNIntAttribute = function (a, nIntAttribute) { - this.m_nIntAttribute[a] = nIntAttribute; - }; - module.IndexedFaceSet.prototype.SetFloatAttributeDim = function (a, d) { - this.m_dimFloatAttribute[a] = d; - }; - module.IndexedFaceSet.prototype.SetIntAttributeDim = function (a, d) { - this.m_dimIntAttribute[a] = d; - }; - module.IndexedFaceSet.prototype.SetFloatAttributeType = function (a, d) { - this.m_typeFloatAttribute[a] = d; - }; - module.IndexedFaceSet.prototype.SetIntAttributeType = function (a, d) { - this.m_typeIntAttribute[a] = d; - }; - module.IndexedFaceSet.prototype.SetFloatAttributeMin = function (a, dim, min) { - this.m_minFloatAttribute[a * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES + dim] = min; - }; - module.IndexedFaceSet.prototype.SetFloatAttributeMax = function (a, dim, max) { - this.m_maxFloatAttribute[a * local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES + dim] = max; - }; - module.IndexedFaceSet.prototype.SetCoordIndex = function (coordIndex) { - this.m_coordIndex = coordIndex; - }; - module.IndexedFaceSet.prototype.SetCoord = function (coord) { - this.m_coord = coord; - }; - module.IndexedFaceSet.prototype.SetNormal = function (normal) { - this.m_normal = normal; - }; - module.IndexedFaceSet.prototype.SetFloatAttribute = function (a, floatAttribute) { - this.m_floatAttribute[a] = floatAttribute; - }; - module.IndexedFaceSet.prototype.SetIntAttribute = function (a, intAttribute) { - this.m_intAttribute[a] = intAttribute; - }; - - // SC3DMCEncodeParams class - module.SC3DMCEncodeParams = function () { - var a; - this.m_numFloatAttributes = 0; - this.m_numIntAttributes = 0; - this.m_floatAttributeQuantBits = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_floatAttributePredMode = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES); - this.m_intAttributePredMode = new Uint32Array(local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES); - this.m_encodeMode = local.O3DGC_SC3DMC_ENCODE_MODE_TFAN; - this.m_streamTypeMode = local.O3DGC_STREAM_TYPE_ASCII; - this.m_coordQuantBits = 14; - this.m_normalQuantBits = 8; - this.m_coordPredMode = local.O3DGC_SC3DMC_PARALLELOGRAM_PREDICTION; - this.m_normalPredMode = local.O3DGC_SC3DMC_SURF_NORMALS_PREDICTION; - for (a = 0; a < local.O3DGC_SC3DMC_MAX_NUM_FLOAT_ATTRIBUTES; ++a) { - this.m_floatAttributePredMode[a] = local.O3DGC_SC3DMC_PARALLELOGRAM_PREDICTION; - } - for (a = 0; a < local.O3DGC_SC3DMC_MAX_NUM_INT_ATTRIBUTES; ++a) { - this.m_intAttributePredMode[a] = local.O3DGC_SC3DMC_DIFFERENTIAL_PREDICTION; - } - }; - module.SC3DMCEncodeParams.prototype.GetStreamType = function () { - return this.m_streamTypeMode; - }; - module.SC3DMCEncodeParams.prototype.GetEncodeMode = function () { - return this.m_encodeMode; - }; - module.SC3DMCEncodeParams.prototype.GetNumFloatAttributes = function () { - return this.m_numFloatAttributes; - }; - module.SC3DMCEncodeParams.prototype.GetNumIntAttributes = function () { - return this.m_numIntAttributes; - }; - module.SC3DMCEncodeParams.prototype.GetCoordQuantBits = function () { - return this.m_coordQuantBits; - }; - module.SC3DMCEncodeParams.prototype.GetNormalQuantBits = function () { - return this.m_normalQuantBits; - }; - module.SC3DMCEncodeParams.prototype.GetFloatAttributeQuantBits = function (a) { - return this.m_floatAttributeQuantBits[a]; - }; - module.SC3DMCEncodeParams.prototype.GetCoordPredMode = function () { - return this.m_coordPredMode; - }; - module.SC3DMCEncodeParams.prototype.GetNormalPredMode = function () { - return this.m_normalPredMode; - }; - module.SC3DMCEncodeParams.prototype.GetFloatAttributePredMode = function (a) { - return this.m_floatAttributePredMode[a]; - }; - module.SC3DMCEncodeParams.prototype.GetIntAttributePredMode = function (a) { - return this.m_intAttributePredMode[a]; - }; - module.SC3DMCEncodeParams.prototype.GetCoordPredMode = function () { - return this.m_coordPredMode; - }; - module.SC3DMCEncodeParams.prototype.GetNormalPredMode = function () { - return this.m_normalPredMode; - }; - module.SC3DMCEncodeParams.prototype.GetFloatAttributePredMode = function (a) { - return this.m_floatAttributePredMode[a]; - }; - module.SC3DMCEncodeParams.prototype.GetIntAttributePredMode = function (a) { - return this.m_intAttributePredMode[a]; - }; - module.SC3DMCEncodeParams.prototype.SetStreamType = function (streamTypeMode) { - this.m_streamTypeMode = streamTypeMode; - }; - module.SC3DMCEncodeParams.prototype.SetEncodeMode = function (encodeMode) { - this.m_encodeMode = encodeMode; - }; - module.SC3DMCEncodeParams.prototype.SetNumFloatAttributes = function (numFloatAttributes) { - this.m_numFloatAttributes = numFloatAttributes; - }; - module.SC3DMCEncodeParams.prototype.SetNumIntAttributes = function (numIntAttributes) { - this.m_numIntAttributes = numIntAttributes; - }; - module.SC3DMCEncodeParams.prototype.SetCoordQuantBits = function (coordQuantBits) { - this.m_coordQuantBits = coordQuantBits; - }; - module.SC3DMCEncodeParams.prototype.SetNormalQuantBits = function (normalQuantBits) { - this.m_normalQuantBits = normalQuantBits; - }; - module.SC3DMCEncodeParams.prototype.SetFloatAttributeQuantBits = function (a, q) { - this.m_floatAttributeQuantBits[a] = q; - }; - module.SC3DMCEncodeParams.prototype.SetCoordPredMode = function (coordPredMode) { - this.m_coordPredMode = coordPredMode; - }; - module.SC3DMCEncodeParams.prototype.SetNormalPredMode = function (normalPredMode) { - this.m_normalPredMode = normalPredMode; - }; - module.SC3DMCEncodeParams.prototype.SetFloatAttributePredMode = function (a, p) { - this.m_floatAttributePredMode[a] = p; - }; - module.SC3DMCEncodeParams.prototype.SetIntAttributePredMode = function (a, p) { - this.m_intAttributePredMode[a] = p; - }; - // AdjacencyInfo class - module.AdjacencyInfo = function () { - this.m_neighborsSize = 0; // actual allocated size for m_neighbors - this.m_numNeighborsSize = 0; // actual allocated size for m_numNeighbors - this.m_numElements = 0; // number of elements - this.m_neighbors = {}; - this.m_numNeighbors = {}; - }; - module.AdjacencyInfo.prototype.Allocate = function (numNeighborsSize, neighborsSize) { - this.m_numElements = numNeighborsSize; - if (neighborsSize > this.m_neighborsSize) { - this.m_neighborsSize = neighborsSize; - this.m_neighbors = new Int32Array(this.m_neighborsSize); - } - if (numNeighborsSize > this.m_numNeighborsSize) { - this.m_numNeighborsSize = numNeighborsSize; - this.m_numNeighbors = new Int32Array(this.m_numNeighborsSize); - } - return module.O3DGC_OK; - }; - module.AdjacencyInfo.prototype.AllocateNumNeighborsArray = function (numElements) { - if (numElements > this.m_numNeighborsSize) { - this.m_numNeighborsSize = numElements; - this.m_numNeighbors = new Int32Array(this.m_numNeighborsSize); - } - this.m_numElements = numElements; - return module.O3DGC_OK; - }; - module.AdjacencyInfo.prototype.AllocateNeighborsArray = function () { - var i; - for (i = 1; i < this.m_numElements; ++i) { - this.m_numNeighbors[i] += this.m_numNeighbors[i - 1]; - } - if (this.m_numNeighbors[this.m_numElements - 1] > this.m_neighborsSize) { - this.m_neighborsSize = this.m_numNeighbors[this.m_numElements - 1]; - this.m_neighbors = new Int32Array(this.m_neighborsSize); - } - return module.O3DGC_OK; - }; - module.AdjacencyInfo.prototype.ClearNumNeighborsArray = function () { - var i; - for (i = 0; i < this.m_numElements; ++i) { - this.m_numNeighbors[i] = 0; - } - return module.O3DGC_OK; - }; - module.AdjacencyInfo.prototype.ClearNeighborsArray = function () { - var i; - for (i = 0; i < this.m_neighborsSize; ++i) { - this.m_neighbors[i] = -1; - } - return module.O3DGC_OK; - }; - module.AdjacencyInfo.prototype.Begin = function (element) { - return (element > 0) ? this.m_numNeighbors[element - 1] : 0; - }; - module.AdjacencyInfo.prototype.End = function (element) { - return this.m_numNeighbors[element]; - }; - module.AdjacencyInfo.prototype.AddNeighbor = function (element, neighbor) { - var p, p0, p1; - p0 = this.Begin(element); - p1 = this.End(element); - for (p = p0; p < p1; ++p) { - if (this.m_neighbors[p] === -1) { - this.m_neighbors[p] = neighbor; - return module.O3DGC_OK; - } - } - return module.O3DGC_ERROR_BUFFER_FULL; - }; - module.AdjacencyInfo.prototype.GetNeighbor = function (element) { - return this.m_neighbors[element]; - }; - module.AdjacencyInfo.prototype.GetNumNeighbors = function (element) { - return this.End(element) - this.Begin(element); - }; - module.AdjacencyInfo.prototype.GetNumNeighborsBuffer = function () { - return this.m_numNeighbors; - }; - module.AdjacencyInfo.prototype.GetNeighborsBuffer = function () { - return this.m_neighbors; - }; - // Vector class - module.Vector = function () { - this.m_data = {}; - this.m_allocated = 0; - this.m_size = 0; - }; - module.Vector.prototype.Clear = function () { - this.m_size = 0; - }; - module.Vector.prototype.Get = function (i) { - return this.m_data[i]; - }; - module.Vector.prototype.GetAllocatedSize = function () { - return this.m_allocated; - }; - module.Vector.prototype.GetSize = function () { - return this.m_size; - }; - module.Vector.prototype.GetBuffer = function () { - return this.m_data; - }; - module.Vector.prototype.SetSize = function (size) { - this.m_size = size; - }; - module.Vector.prototype.Allocate = function (size) { - var i, tmp_data; - if (size > this.m_allocated) { - this.m_allocated = size; - tmp_data = new Int32Array(this.m_allocated); - if (this.m_size > 0) { - for (i = 0; i < this.m_size; ++i) { - tmp_data[i] = this.m_data[i]; - } - } - this.m_data = tmp_data; - } - }; - module.Vector.prototype.PushBack = function (value) { - var i, tmp_data; - if (this.m_size === this.m_allocated) { - this.m_allocated *= 2; - if (this.m_allocated < local.O3DGC_DEFAULT_VECTOR_SIZE) { - this.m_allocated = local.O3DGC_DEFAULT_VECTOR_SIZE; - } - tmp_data = new Int32Array(this.m_allocated); - if (this.m_size > 0) { - for (i = 0; i < this.m_size; ++i) { - tmp_data[i] = this.m_data[i]; - } - } - this.m_data = tmp_data; - } - this.m_data[this.m_size++] = value; - }; - // CompressedTriangleFans class - module.CompressedTriangleFans = function () { - this.m_numTFANs = new module.Vector(); - this.m_degrees = new module.Vector(); - this.m_configs = new module.Vector(); - this.m_operations = new module.Vector(); - this.m_indices = new module.Vector(); - this.m_trianglesOrder = new module.Vector(); - this.m_streamType = local.O3DGC_STREAM_TYPE_UNKOWN; - }; - module.CompressedTriangleFans.prototype.GetStreamType = function () { - return this.m_streamType; - }; - module.CompressedTriangleFans.prototype.SetStreamType = function (streamType) { - this.m_streamType = streamType; - }; - module.CompressedTriangleFans.prototype.Clear = function () { - this.m_numTFANs.Clear(); - this.m_degrees.Clear(); - this.m_configs.Clear(); - this.m_operations.Clear(); - this.m_indices.Clear(); - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.Allocate = function (numVertices, numTriangles) { - this.m_numTFANs.Allocate(numVertices); - this.m_degrees.Allocate(2 * numVertices); - this.m_configs.Allocate(2 * numVertices); - this.m_operations.Allocate(2 * numVertices); - this.m_indices.Allocate(2 * numVertices); - this.m_trianglesOrder.Allocate(numTriangles); - this.Clear(); - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.PushNumTFans = function (numTFans) { - this.m_numTFANs.PushBack(numTFans); - }; - module.CompressedTriangleFans.prototype.ReadNumTFans = function (it) { - return this.m_numTFANs.Get(it.m_count++); - }; - module.CompressedTriangleFans.prototype.PushDegree = function (degree) { - this.m_degrees.PushBack(degree); - }; - module.CompressedTriangleFans.prototype.ReadDegree = function (it) { - return this.m_degrees.Get(it.m_count++); - }; - module.CompressedTriangleFans.prototype.PushConfig = function (config) { - this.m_configs.PushBack(config); - }; - module.CompressedTriangleFans.prototype.ReadConfig = function (it) { - return this.m_configs.Get(it.m_count++); - }; - module.CompressedTriangleFans.prototype.PushOperation = function (op) { - this.m_operations.PushBack(op); - }; - module.CompressedTriangleFans.prototype.ReadOperation = function (it) { - return this.m_operations.Get(it.m_count++); - }; - module.CompressedTriangleFans.prototype.PushIndex = function (index) { - this.m_indices.PushBack(index); - }; - module.CompressedTriangleFans.prototype.ReadIndex = function (it) { - return this.m_indices.Get(it.m_count++); - }; - module.CompressedTriangleFans.prototype.PushTriangleIndex = function (index) { - this.m_trianglesOrder.PushBack(IntToUInt(index)); - }; - module.CompressedTriangleFans.prototype.ReadTriangleIndex = function (it) { - return UIntToInt(this.m_trianglesOrder.Get(it.m_count++)); - }; - module.CompressedTriangleFans.prototype.LoadUIntData = function (data, bstream, it) { - var size, i; - bstream.ReadUInt32ASCII(it); - size = bstream.ReadUInt32ASCII(it); - data.Allocate(size); - data.Clear(); - for (i = 0; i < size; ++i) { - data.PushBack(bstream.ReadUIntASCII(it)); - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.LoadIntData = function (data, bstream, it) { - var size, i; - bstream.ReadUInt32ASCII(it); - size = bstream.ReadUInt32ASCII(it); - data.Allocate(size); - data.Clear(); - for (i = 0; i < size; ++i) { - data.PushBack(bstream.ReadIntASCII(it)); - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.LoadBinData = function (data, bstream, it) { - var size, symbol, i, h; - bstream.ReadUInt32ASCII(it); - size = bstream.ReadUInt32ASCII(it); - data.Allocate(size * local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0); - data.Clear(); - i = 0; - while (i < size) { - symbol = bstream.ReadUCharASCII(it); - for (h = 0; h < local.O3DGC_BINARY_STREAM_BITS_PER_SYMBOL0; ++h) { - data.PushBack(symbol & 1); - symbol >>>= 1; - ++i; - } - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.LoadUIntAC = function (data, M, bstream, it) { - - var sizeSize, size, minValue, buffer, acd, mModelValues, i; - sizeSize = bstream.ReadUInt32Bin(it) - 12; - size = bstream.ReadUInt32Bin(it); - if (size === 0) { - return module.O3DGC_OK; - } - minValue = bstream.ReadUInt32Bin(it); - buffer = bstream.GetBuffer(it, sizeSize); - it.m_count += sizeSize; - data.Allocate(size); - acd = new module.ArithmeticDecoder(); - acd.SetBuffer(sizeSize, buffer); - acd.StartDecoder(); - mModelValues = new module.AdaptiveDataModel(); - mModelValues.SetAlphabet(M + 1); - for (i = 0; i < size; ++i) { - data.PushBack(acd.DecodeAdaptiveDataModel(mModelValues) + minValue); - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.LoadIntACEGC = function (data, M, bstream, it) { - var sizeSize, size, minValue, buffer, acd, mModelValues, bModel0, bModel1, value, i; - sizeSize = bstream.ReadUInt32Bin(it) - 12; - size = bstream.ReadUInt32Bin(it); - if (size === 0) { - return module.O3DGC_OK; - } - minValue = bstream.ReadUInt32Bin(it) - local.O3DGC_MAX_LONG; - buffer = bstream.GetBuffer(it, sizeSize); - it.m_count += sizeSize; - data.Allocate(size); - acd = new module.ArithmeticDecoder(); - acd.SetBuffer(sizeSize, buffer); - acd.StartDecoder(); - mModelValues = new module.AdaptiveDataModel(); - mModelValues.SetAlphabet(M + 2); - bModel0 = new module.StaticBitModel(); - bModel1 = new module.AdaptiveBitModel(); - for (i = 0; i < size; ++i) { - value = acd.DecodeAdaptiveDataModel(mModelValues); - if (value === M) { - value += acd.ExpGolombDecode(0, bModel0, bModel1); - } - data.PushBack(value + minValue); - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.LoadBinAC = function (data, bstream, it) { - var sizeSize, size, buffer, acd, bModel, i; - sizeSize = bstream.ReadUInt32Bin(it) - 8; - size = bstream.ReadUInt32Bin(it); - if (size === 0) { - return module.O3DGC_OK; - } - buffer = bstream.GetBuffer(it, sizeSize); - it.m_count += sizeSize; - data.Allocate(size); - acd = new module.ArithmeticDecoder(); - acd.SetBuffer(sizeSize, buffer); - acd.StartDecoder(); - bModel = new module.AdaptiveBitModel(); - for (i = 0; i < size; ++i) { - data.PushBack(acd.DecodeAdaptiveBitModel(bModel)); - } - return module.O3DGC_OK; - }; - module.CompressedTriangleFans.prototype.Load = function (bstream, iterator, decodeTrianglesOrder, streamType) { - if (streamType === local.O3DGC_STREAM_TYPE_ASCII) { - this.LoadUIntData(this.m_numTFANs, bstream, iterator); - this.LoadUIntData(this.m_degrees, bstream, iterator); - this.LoadUIntData(this.m_configs, bstream, iterator); - this.LoadBinData(this.m_operations, bstream, iterator); - this.LoadIntData(this.m_indices, bstream, iterator); - if (decodeTrianglesOrder) { - this.LoadUIntData(this.m_trianglesOrder, bstream, iterator); - } - } else { - this.LoadIntACEGC(this.m_numTFANs, 4, bstream, iterator); - this.LoadIntACEGC(this.m_degrees, 16, bstream, iterator); - this.LoadUIntAC(this.m_configs, 10, bstream, iterator); - this.LoadBinAC(this.m_operations, bstream, iterator); - this.LoadIntACEGC(this.m_indices, 8, bstream, iterator); - if (decodeTrianglesOrder) { - this.LoadIntACEGC(this.m_trianglesOrder, 16, bstream, iterator); - } - } - return module.O3DGC_OK; - }; - // TriangleFans class - module.TriangleFans = function () { - this.m_verticesAllocatedSize = 0; - this.m_sizeTFANAllocatedSize = 0; - this.m_numTFANs = 0; - this.m_numVertices = 0; - this.m_sizeTFAN = {}; - this.m_vertices = {}; - }; - module.TriangleFans.prototype.Allocate = function (sizeTFAN, verticesSize) { - this.m_numTFANs = 0; - this.m_numVertices = 0; - if (this.m_verticesAllocatedSize < verticesSize) { - this.m_verticesAllocatedSize = verticesSize; - this.m_vertices = new Int32Array(this.m_verticesAllocatedSize); - } - if (this.m_sizeTFANAllocatedSize < sizeTFAN) { - this.m_sizeTFANAllocatedSize = sizeTFAN; - this.m_sizeTFAN = new Int32Array(this.m_sizeTFANAllocatedSize); - } - return module.O3DGC_OK; - }; - module.TriangleFans.prototype.Clear = function () { - this.m_numTFANs = 0; - this.m_numVertices = 0; - return module.O3DGC_OK; - }; - module.TriangleFans.prototype.AddVertex = function (vertex) { - var i, tmp_vertices; - ++this.m_numVertices; - if (this.m_numVertices > this.m_verticesAllocatedSize) { - this.m_verticesAllocatedSize *= 2; - tmp_vertices = new Int32Array(this.m_verticesAllocatedSize); - for (i = 0; i < this.m_numVertices; ++i) { - tmp_vertices[i] = this.m_vertices[i]; - } - this.m_vertices = tmp_vertices; - } - this.m_vertices[this.m_numVertices - 1] = vertex; - ++this.m_sizeTFAN[this.m_numTFANs - 1]; - return module.O3DGC_OK; - }; - module.TriangleFans.prototype.AddTFAN = function () { - var i, tmp_sizeTFAN; - ++this.m_numTFANs; - if (this.m_numTFANs > this.m_sizeTFANAllocatedSize) { - this.m_sizeTFANAllocatedSize *= 2; - tmp_sizeTFAN = new Int32Array(this.m_sizeTFANAllocatedSize); - for (i = 0; i < this.m_numTFANs; ++i) { - tmp_sizeTFAN[i] = this.m_sizeTFAN[i]; - } - this.m_sizeTFAN = tmp_sizeTFAN; - } - this.m_sizeTFAN[this.m_numTFANs - 1] = (this.m_numTFANs > 1) ? this.m_sizeTFAN[this.m_numTFANs - 2] : 0; - return module.O3DGC_OK; - }; - module.TriangleFans.prototype.Begin = function (tfan) { - return (tfan > 0) ? this.m_sizeTFAN[tfan - 1] : 0; - }; - module.TriangleFans.prototype.End = function (tfan) { - return this.m_sizeTFAN[tfan]; - }; - module.TriangleFans.prototype.GetVertex = function (vertex) { - return this.m_vertices[vertex]; - }; - module.TriangleFans.prototype.GetTFANSize = function (tfan) { - return this.End(tfan) - this.Begin(tfan); - }; - module.TriangleFans.prototype.GetNumTFANs = function () { - return this.m_numTFANs; - }; - module.TriangleFans.prototype.GetNumVertices = function () { - return this.m_numVertices; - }; - // TriangleListDecoder class - module.TriangleListDecoder = function () { - this.m_itNumTFans = new module.Iterator(); - this.m_itDegree = new module.Iterator(); - this.m_itConfig = new module.Iterator(); - this.m_itOperation = new module.Iterator(); - this.m_itIndex = new module.Iterator(); - this.m_maxNumVertices = 0; - this.m_maxNumTriangles = 0; - this.m_numTriangles = 0; - this.m_numVertices = 0; - this.m_tempTrianglesSize = 0; - this.m_vertexCount = 0; - this.m_triangleCount = 0; - this.m_numConqueredTriangles = 0; - this.m_numVisitedVertices = 0; - this.m_triangles = {}; - this.m_tempTriangles = {}; - this.m_visitedVertices = {}; - this.m_visitedVerticesValence = {}; - this.m_vertexToTriangle = new module.AdjacencyInfo(); - this.m_ctfans = new module.CompressedTriangleFans(); - this.m_tfans = new module.TriangleFans(); - this.m_streamType = local.O3DGC_STREAM_TYPE_ASCII; - this.m_decodeTrianglesOrder = false; - this.m_decodeVerticesOrder = false; - this.m_processConfig = { - 0: function (decoder, degree) { // ops: 1000001 vertices: -1 -2 - var u; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - for (u = 1; u < degree - 1; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - }, - 1: function (decoder, degree, focusVertex) { // ops: 1xxxxxx1 vertices: -1 x x x x x -2 - var u, op, index; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - for (u = 1; u < degree - 1; ++u) { - op = decoder.m_ctfans.ReadOperation(decoder.m_itOperation); - if (op === 1) { - index = decoder.m_ctfans.ReadIndex(decoder.m_itIndex); - if (index < 0) { - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[-index - 1]); - } else { - decoder.m_tfans.AddVertex(index + focusVertex); - } - } else { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - }, - 2: function (decoder, degree) { // ops: 00000001 vertices: -1 - var u; - for (u = 0; u < degree - 1; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - }, - 3: function (decoder, degree) { // ops: 00000001 vertices: -2 - var u; - for (u = 0; u < degree - 1; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - }, - 4: function (decoder, degree) {// ops: 10000000 vertices: -1 - var u; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - for (u = 1; u < degree; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - }, - 5: function (decoder, degree) { // ops: 10000000 vertices: -2 - var u; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - for (u = 1; u < degree; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - }, - 6: function (decoder, degree) { // ops: 00000000 vertices: - var u; - for (u = 0; u < degree; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - }, - 7: function (decoder, degree) { // ops: 1000001 vertices: -2 -1 - var u; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - for (u = 1; u < degree - 1; ++u) { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - }, - 8: function (decoder, degree, focusVertex) { // ops: 1xxxxxx1 vertices: -2 x x x x x -1 - var u, op, index; - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[1]); - for (u = 1; u < degree - 1; ++u) { - op = decoder.m_ctfans.ReadOperation(decoder.m_itOperation); - if (op === 1) { - index = decoder.m_ctfans.ReadIndex(decoder.m_itIndex); - if (index < 0) { - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[-index - 1]); - } else { - decoder.m_tfans.AddVertex(index + focusVertex); - } - } else { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - } - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[0]); - }, - 9: function (decoder, degree, focusVertex) { // general case - var u, op, index; - for (u = 0; u < degree; ++u) { - op = decoder.m_ctfans.ReadOperation(decoder.m_itOperation); - if (op === 1) { - index = decoder.m_ctfans.ReadIndex(decoder.m_itIndex); - if (index < 0) { - decoder.m_tfans.AddVertex(decoder.m_visitedVertices[-index - 1]); - } else { - decoder.m_tfans.AddVertex(index + focusVertex); - } - } else { - decoder.m_visitedVertices[decoder.m_numVisitedVertices++] = decoder.m_vertexCount; - decoder.m_tfans.AddVertex(decoder.m_vertexCount++); - } - } - } - }; - }; - module.TriangleListDecoder.prototype.GetStreamType = function () { - return this.m_streamType; - }; - module.TriangleListDecoder.prototype.GetReorderTriangles = function () { - return this.m_decodeTrianglesOrder; - }; - module.TriangleListDecoder.prototype.GetReorderVertices = function () { - return this.m_decodeVerticesOrder; - }; - module.TriangleListDecoder.prototype.SetStreamType = function (streamType) { - this.m_streamType = streamType; - }; - module.TriangleListDecoder.prototype.GetVertexToTriangle = function () { - return this.m_vertexToTriangle; - }; - module.TriangleListDecoder.prototype.Reorder = function () { - var triangles, numTriangles, order, it, prevTriangleIndex, tempTriangles, t, i; - if (this.m_decodeTrianglesOrder) { - triangles = this.m_triangles; - numTriangles = this.m_numTriangles; - order = this.m_ctfans.m_trianglesOrder.m_data; - tempTriangles = this.m_tempTriangles; - tempTriangles.set(triangles); - it = 0; - prevTriangleIndex = 0; - for (i = 0; i < numTriangles; ++i) { - t = UIntToInt(order[it++]) + prevTriangleIndex; - triangles[3 * t] = tempTriangles[3 * i]; - triangles[3 * t + 1] = tempTriangles[3 * i + 1]; - triangles[3 * t + 2] = tempTriangles[3 * i + 2]; - prevTriangleIndex = t + 1; - } - } - return module.O3DGC_OK; - }; - module.TriangleListDecoder.prototype.CompueLocalConnectivityInfo = function (focusVertex) { - var visitedVertices, visitedVerticesValence, triangles, vertexToTriangle, beginV2T, endV2T, numConqueredTriangles, foundOrInserted, numVisitedVertices, tmp, i, j, k, h, x, y, t, p, v; - visitedVertices = this.m_visitedVertices; - visitedVerticesValence = this.m_visitedVerticesValence; - triangles = this.m_triangles; - vertexToTriangle = this.m_vertexToTriangle; - beginV2T = vertexToTriangle.Begin(focusVertex); - endV2T = vertexToTriangle.End(focusVertex); - numConqueredTriangles = 0; - numVisitedVertices = 0; - t = 0; - for (i = beginV2T; (t >= 0) && (i < endV2T); ++i) { - t = vertexToTriangle.GetNeighbor(i); - if (t >= 0) { - ++numConqueredTriangles; - p = 3 * t; - // extract visited vertices - for (k = 0; k < 3; ++k) { - v = triangles[p + k]; - if (v > focusVertex) { // vertices are insertices by increasing traversal order - foundOrInserted = false; - for (j = 0; j < numVisitedVertices; ++j) { - if (v === visitedVertices[j]) { - visitedVerticesValence[j]++; - foundOrInserted = true; - break; - } else if (v < visitedVertices[j]) { - ++numVisitedVertices; - for (h = numVisitedVertices - 1; h > j; --h) { - visitedVertices[h] = visitedVertices[h - 1]; - visitedVerticesValence[h] = visitedVerticesValence[h - 1]; - } - visitedVertices[j] = v; - visitedVerticesValence[j] = 1; - foundOrInserted = true; - break; - } - } - if (!foundOrInserted) { - visitedVertices[numVisitedVertices] = v; - visitedVerticesValence[numVisitedVertices] = 1; - numVisitedVertices++; - } - } - } - } - } - // re-order visited vertices by taking into account their valence (i.e., # of conquered triangles incident to each vertex) - // in order to avoid config. 9 - if (numVisitedVertices > 2) { - for (x = 1; x < numVisitedVertices; ++x) { - if (visitedVerticesValence[x] === 1) { - y = x; - while ((y > 0) && (visitedVerticesValence[y] < visitedVerticesValence[y - 1])) { - tmp = visitedVerticesValence[y]; - visitedVerticesValence[y] = visitedVerticesValence[y - 1]; - visitedVerticesValence[y - 1] = tmp; - tmp = visitedVertices[y]; - visitedVertices[y] = visitedVertices[y - 1]; - visitedVertices[y - 1] = tmp; - --y; - } - } - } - } - this.m_numConqueredTriangles = numConqueredTriangles; - this.m_numVisitedVertices = numVisitedVertices; - return module.O3DGC_OK; - }; - module.TriangleListDecoder.prototype.DecompressTFAN = function (focusVertex) { - var vertexToTriangle, triangles, itDegree, itConfig, tfans, ntfans, processConfig, ctfans, triangleCount, numConqueredTriangles, degree, config, k0, k1, b, c, t, f, k; - vertexToTriangle = this.m_vertexToTriangle; - triangles = this.m_triangles; - itDegree = this.m_itDegree; - itConfig = this.m_itConfig; - tfans = this.m_tfans; - processConfig = this.m_processConfig; - ctfans = this.m_ctfans; - triangleCount = this.m_triangleCount; - numConqueredTriangles = this.m_numConqueredTriangles; - ntfans = ctfans.ReadNumTFans(this.m_itNumTFans); - if (ntfans > 0) { - for (f = 0; f < ntfans; ++f) { - tfans.AddTFAN(); - degree = ctfans.ReadDegree(itDegree) + 2 - numConqueredTriangles; - config = ctfans.ReadConfig(itConfig); - k0 = tfans.GetNumVertices(); - tfans.AddVertex(focusVertex); - processConfig[config](this, degree, focusVertex); - k1 = tfans.GetNumVertices(); - b = tfans.GetVertex(k0 + 1); - for (k = k0 + 2; k < k1; ++k) { - c = tfans.GetVertex(k); - t = triangleCount * 3; - triangles[t++] = focusVertex; - triangles[t++] = b; - triangles[t] = c; - vertexToTriangle.AddNeighbor(focusVertex, triangleCount); - vertexToTriangle.AddNeighbor(b, triangleCount); - vertexToTriangle.AddNeighbor(c, triangleCount); - b = c; - triangleCount++; - } - } - } - this.m_triangleCount = triangleCount; - return module.O3DGC_OK; - }; - module.TriangleListDecoder.prototype.Decompress = function () { - var focusVertex; - for (focusVertex = 0; focusVertex < this.m_numVertices; ++focusVertex) { - if (focusVertex === this.m_vertexCount) { - this.m_vertexCount++; // insert focusVertex - } - this.CompueLocalConnectivityInfo(focusVertex); - this.DecompressTFAN(focusVertex); - } - return module.O3DGC_OK; - }; - module.TriangleListDecoder.prototype.Init = function (triangles, numTriangles, numVertices, maxSizeV2T) { - var i, numNeighbors; - this.m_numTriangles = numTriangles; - this.m_numVertices = numVertices; - this.m_triangles = triangles; - this.m_vertexCount = 0; - this.m_triangleCount = 0; - this.m_itNumTFans.m_count = 0; - this.m_itDegree.m_count = 0; - this.m_itConfig.m_count = 0; - this.m_itOperation.m_count = 0; - this.m_itIndex.m_count = 0; - if (this.m_numVertices > this.m_maxNumVertices) { - this.m_maxNumVertices = this.m_numVertices; - this.m_visitedVerticesValence = new Int32Array(this.m_numVertices); - this.m_visitedVertices = new Int32Array(this.m_numVertices); - } - if (this.m_decodeTrianglesOrder && this.m_tempTrianglesSize < this.m_numTriangles) { - this.m_tempTrianglesSize = this.m_numTriangles; - this.m_tempTriangles = new Int32Array(3 * this.m_tempTrianglesSize); - } - this.m_ctfans.SetStreamType(this.m_streamType); - this.m_ctfans.Allocate(this.m_numVertices, this.m_numTriangles); - this.m_tfans.Allocate(2 * this.m_numVertices, 8 * this.m_numVertices); - // compute vertex-to-triangle adjacency information - this.m_vertexToTriangle.AllocateNumNeighborsArray(numVertices); - numNeighbors = this.m_vertexToTriangle.GetNumNeighborsBuffer(); - for (i = 0; i < numVertices; ++i) { - numNeighbors[i] = maxSizeV2T; - } - this.m_vertexToTriangle.AllocateNeighborsArray(); - this.m_vertexToTriangle.ClearNeighborsArray(); - return module.O3DGC_OK; - }; - module.TriangleListDecoder.prototype.Decode = function (triangles, numTriangles, numVertices, bstream, it) { - var compressionMask, maxSizeV2T; - compressionMask = bstream.ReadUChar(it, this.m_streamType); - this.m_decodeTrianglesOrder = ((compressionMask & 2) !== 0); - this.m_decodeVerticesOrder = ((compressionMask & 1) !== 0); - if (this.m_decodeVerticesOrder) { // vertices reordering not supported - return module.O3DGC_ERROR_NON_SUPPORTED_FEATURE; - } - maxSizeV2T = bstream.ReadUInt32(it, this.m_streamType); - this.Init(triangles, numTriangles, numVertices, maxSizeV2T); - this.m_ctfans.Load(bstream, it, this.m_decodeTrianglesOrder, this.m_streamType); - this.Decompress(); - return module.O3DGC_OK; - }; - // SC3DMCDecoder class - module.SC3DMCDecoder = function () { - var i; - this.m_iterator = new module.Iterator(); - this.m_streamSize = 0; - this.m_params = new module.SC3DMCEncodeParams(); - this.m_triangleListDecoder = new module.TriangleListDecoder(); - this.m_quantFloatArray = {}; - this.m_orientation = {}; - this.m_normals = {}; - this.m_quantFloatArraySize = 0; - this.m_normalsSize = 0; - this.m_orientationSize = 0; - this.m_stats = new module.SC3DMCStats(); - this.m_streamType = local.O3DGC_STREAM_TYPE_UNKOWN; - this.m_neighbors = []; - this.m_idelta = new Float32Array(local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES); - this.m_minNormal = new Float32Array(2); - this.m_maxNormal = new Float32Array(2); - this.m_minNormal[0] = this.m_minNormal[1] = -2; - this.m_maxNormal[0] = this.m_maxNormal[1] = 2; - for (i = 0; i < local.O3DGC_SC3DMC_MAX_DIM_ATTRIBUTES; ++i) { - this.m_neighbors[i] = new module.SC3DMCPredictor(); - } - }; - module.SC3DMCDecoder.prototype.GetStats = function () { - return this.m_stats; - }; - module.SC3DMCDecoder.prototype.DecodeHeader = function (ifs, bstream) { - var c0, start_code, mask, j, a, d; - c0 = this.m_iterator.m_count; - start_code = bstream.ReadUInt32(this.m_iterator, local.O3DGC_STREAM_TYPE_BINARY); - if (start_code !== local.O3DGC_SC3DMC_START_CODE) { - this.m_iterator.m_count = c0; - start_code = bstream.ReadUInt32(this.m_iterator, local.O3DGC_STREAM_TYPE_ASCII); - if (start_code !== local.O3DGC_SC3DMC_START_CODE) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - this.m_streamType = local.O3DGC_STREAM_TYPE_ASCII; - } else { - this.m_streamType = local.O3DGC_STREAM_TYPE_BINARY; - } - this.m_streamSize = bstream.ReadUInt32(this.m_iterator, this.m_streamType); - this.m_params.SetEncodeMode(bstream.ReadUChar(this.m_iterator, this.m_streamType)); - - ifs.SetCreaseAngle(bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - mask = bstream.ReadUChar(this.m_iterator, this.m_streamType); - ifs.SetCCW((mask & 1) === 1); - ifs.SetSolid((mask & 2) === 1); - ifs.SetConvex((mask & 4) === 1); - ifs.SetIsTriangularMesh((mask & 8) === 1); - - ifs.SetNCoord(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - ifs.SetNNormal(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - ifs.SetNumFloatAttributes(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - ifs.SetNumIntAttributes(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - - if (ifs.GetNCoord() > 0) { - ifs.SetNCoordIndex(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - for (j = 0; j < 3; ++j) { - ifs.SetCoordMin(j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - ifs.SetCoordMax(j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - } - this.m_params.SetCoordQuantBits(bstream.ReadUChar(this.m_iterator, this.m_streamType)); - } - if (ifs.GetNNormal() > 0) { - ifs.SetNNormalIndex(bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - for (j = 0; j < 3; ++j) { - ifs.SetNormalMin(j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - ifs.SetNormalMax(j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - } - ifs.SetNormalPerVertex(bstream.ReadUChar(this.m_iterator, this.m_streamType) === 1); - this.m_params.SetNormalQuantBits(bstream.ReadUChar(this.m_iterator, this.m_streamType)); - } - for (a = 0; a < ifs.GetNumFloatAttributes(); ++a) { - ifs.SetNFloatAttribute(a, bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - if (ifs.GetNFloatAttribute(a) > 0) { - ifs.SetNFloatAttributeIndex(a, bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - d = bstream.ReadUChar(this.m_iterator, this.m_streamType); - ifs.SetFloatAttributeDim(a, d); - for (j = 0; j < d; ++j) { - ifs.SetFloatAttributeMin(a, j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - ifs.SetFloatAttributeMax(a, j, bstream.ReadFloat32(this.m_iterator, this.m_streamType)); - } - ifs.SetFloatAttributePerVertex(a, bstream.ReadUChar(this.m_iterator, this.m_streamType) === 1); - ifs.SetFloatAttributeType(a, bstream.ReadUChar(this.m_iterator, this.m_streamType)); - this.m_params.SetFloatAttributeQuantBits(a, bstream.ReadUChar(this.m_iterator, this.m_streamType)); - } - } - for (a = 0; a < ifs.GetNumIntAttributes(); ++a) { - ifs.SetNIntAttribute(a, bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - if (ifs.GetNIntAttribute(a) > 0) { - ifs.SetNIntAttributeIndex(a, bstream.ReadUInt32(this.m_iterator, this.m_streamType)); - ifs.SetIntAttributeDim(a, bstream.ReadUChar(this.m_iterator, this.m_streamType)); - ifs.SetIntAttributePerVertex(a, bstream.ReadUChar(this.m_iterator, this.m_streamType) === 1); - ifs.SetIntAttributeType(a, bstream.ReadUChar(this.m_iterator, this.m_streamType)); - } - } - return module.O3DGC_OK; - }; - function DeltaPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride) { - var ws, k, p, w, i, id; - id = new module.SC3DMCTriplet(-1, -1, -1); - for (k = 0; k < 3; ++k) { - w = triangles[ta * 3 + k]; - if (w < v) { - id.m_a = -1; - id.m_b = -1; - id.m_c = w; - p = InsertPredictor(id, nPred, neighbors, dimFloatArray); - if (p !== -1) { - ws = w * stride; - for (i = 0; i < dimFloatArray; ++i) { - neighbors[p].m_pred[i] = quantFloatArray[ws + i]; - } - } - } - } - } - function ParallelogramPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride, v2T, v2TNeighbors) { - var ta3, tb3, as, bs, cs, a, b, c, x, i, k, u1_begin, u1_end, u1, tb, foundB, p, id; - ta3 = ta * 3; - id = new module.SC3DMCTriplet(-1, -1, -1); - if (triangles[ta3] === v) { - a = triangles[ta3 + 1]; - b = triangles[ta3 + 2]; - } else if (triangles[ta3 + 1] === v) { - a = triangles[ta3]; - b = triangles[ta3 + 2]; - } else { - a = triangles[ta3]; - b = triangles[ta3 + 1]; - } - if (a < v && b < v) { - u1_begin = v2T.Begin(a); - u1_end = v2T.End(a); - for (u1 = u1_begin; u1 < u1_end; ++u1) { - tb = v2TNeighbors[u1]; - if (tb < 0) { - break; - } - tb3 = tb * 3; - c = -1; - foundB = false; - for (k = 0; k < 3; ++k) { - x = triangles[tb3 + k]; - if (x === b) { - foundB = true; - } else if (x < v && x !== a) { - c = x; - } - } - if (c !== -1 && foundB) { - if (a < b) { - id.m_a = a; - id.m_b = b; - } else { - id.m_a = b; - id.m_b = a; - } - id.m_c = (-c - 1); - p = InsertPredictor(id, nPred, neighbors, dimFloatArray); - if (p !== -1) { - as = a * stride; - bs = b * stride; - cs = c * stride; - for (i = 0; i < dimFloatArray; ++i) { - neighbors[p].m_pred[i] = quantFloatArray[as + i] + quantFloatArray[bs + i] - quantFloatArray[cs + i]; - } - } - } - } - } - } - module.SC3DMCDecoder.prototype.DecodeIntArrayBinary = function (intArray, - numIntArray, - dimIntArray, - stride, - ifs, - predMode, - bstream) { - var testPredEnabled, bestPred, i, u, ta, u_begin, u_end, buffer, iterator, streamType, predResidual, acd, bModel0, bModel1, mModelPreds, v2T, v2TNeighbors, triangles, size, start, streamSize, mask, binarization, iteratorPred, exp_k, M, id, mModelValues, neighbors, normals, nPred, v; - iterator = this.m_iterator; - streamType = this.m_streamType; - acd = new module.ArithmeticDecoder(); - bModel0 = new module.StaticBitModel(); - bModel1 = new module.AdaptiveBitModel(); - mModelPreds = new module.AdaptiveDataModel(); - mModelPreds.SetAlphabet(local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS + 1); - v2T = this.m_triangleListDecoder.GetVertexToTriangle(); - v2TNeighbors = v2T.m_neighbors; - triangles = ifs.GetCoordIndex(); - size = numIntArray * dimIntArray; - start = iterator.m_count; - streamSize = bstream.ReadUInt32(iterator, streamType); // bitsream size - mask = bstream.ReadUChar(iterator, streamType); - binarization = (mask >>> 4) & 7; - predMode.m_value = mask & 7; - streamSize -= (iterator.m_count - start); - iteratorPred = new module.Iterator(); - iteratorPred.m_count = iterator.m_count + streamSize; - exp_k = 0; - M = 0; - id = new module.SC3DMCTriplet(-1, -1, -1); - if (binarization !== local.O3DGC_SC3DMC_BINARIZATION_AC_EGC) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - buffer = bstream.GetBuffer(iterator, streamSize); - iterator.m_count += streamSize; - acd.SetBuffer(streamSize, buffer); - acd.StartDecoder(); - exp_k = acd.ExpGolombDecode(0, bModel0, bModel1); - M = acd.ExpGolombDecode(0, bModel0, bModel1); - mModelValues = new module.AdaptiveDataModel(); - mModelValues.SetAlphabet(M + 2); - neighbors = this.m_neighbors; - normals = this.m_normals; - nPred = new module.NumberRef(); - testPredEnabled = predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION; - for (v = 0; v < numIntArray; ++v) { - nPred.m_value = 0; - if (v2T.GetNumNeighbors(v) > 0 && testPredEnabled) { - u_begin = v2T.Begin(v); - u_end = v2T.End(v); - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - DeltaPredictors(triangles, ta, v, nPred, neighbors, dimIntArray, intArray, stride); - } - } - if (nPred.m_value > 1) { - bestPred = acd.DecodeAdaptiveDataModel(mModelPreds); - for (i = 0; i < dimIntArray; ++i) { - predResidual = acd.DecodeIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - intArray[v * stride + i] = predResidual + neighbors[bestPred].m_pred[i]; - } - } else if (v > 0 && predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION) { - for (i = 0; i < dimIntArray; ++i) { - predResidual = acd.DecodeIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - intArray[v * stride + i] = predResidual + intArray[(v - 1) * stride + i]; - } - } else { - for (i = 0; i < dimIntArray; ++i) { - predResidual = acd.DecodeUIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - intArray[v * stride + i] = predResidual; - } - } - } - iterator.m_count = iteratorPred.m_count; - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.DecodeIntArrayASCII = function (intArray, - numIntArray, - dimIntArray, - stride, - ifs, - predMode, - bstream) { - var testPredEnabled, iterator, streamType, predResidual, v2T, v2TNeighbors, triangles, size, start, streamSize, mask, binarization, iteratorPred, id, neighbors, normals, nPred, v, u_begin, u_end, u, ta, i, bestPred; - iterator = this.m_iterator; - streamType = this.m_streamType; - v2T = this.m_triangleListDecoder.GetVertexToTriangle(); - v2TNeighbors = v2T.m_neighbors; - triangles = ifs.GetCoordIndex(); - size = numIntArray * dimIntArray; - start = iterator.m_count; - streamSize = bstream.ReadUInt32(iterator, streamType); // bitsream size - mask = bstream.ReadUChar(iterator, streamType); - binarization = (mask >>> 4) & 7; - predMode.m_value = mask & 7; - streamSize -= (iterator.m_count - start); - iteratorPred = new module.Iterator(); - iteratorPred.m_count = iterator.m_count + streamSize; - id = new module.SC3DMCTriplet(-1, -1, -1); - if (binarization !== local.O3DGC_SC3DMC_BINARIZATION_ASCII) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - bstream.ReadUInt32(iteratorPred, streamType); // predictors bitsream size - neighbors = this.m_neighbors; - normals = this.m_normals; - nPred = new module.NumberRef(); - testPredEnabled = predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION; - for (v = 0; v < numIntArray; ++v) { - nPred.m_value = 0; - if (v2T.GetNumNeighbors(v) > 0 && testPredEnabled) { - u_begin = v2T.Begin(v); - u_end = v2T.End(v); - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - DeltaPredictors(triangles, ta, v, nPred, neighbors, dimIntArray, intArray, stride); - } - } - if (nPred.m_value > 1) { - bestPred = bstream.ReadUCharASCII(iteratorPred); - for (i = 0; i < dimIntArray; ++i) { - predResidual = bstream.ReadIntASCII(iterator); - intArray[v * stride + i] = predResidual + neighbors[bestPred].m_pred[i]; - } - } else if (v > 0 && predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION) { - for (i = 0; i < dimIntArray; ++i) { - predResidual = bstream.ReadIntASCII(iterator); - intArray[v * stride + i] = predResidual + intArray[(v - 1) * stride + i]; - } - } else { - for (i = 0; i < dimIntArray; ++i) { - predResidual = bstream.ReadUIntASCII(iterator); - intArray[v * stride + i] = predResidual; - } - } - } - iterator.m_count = iteratorPred.m_count; - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.DecodeIntArray = function (intArray, - numIntArray, - dimIntArray, - stride, - ifs, - predMode, - bstream) { - if (this.m_streamType === local.O3DGC_STREAM_TYPE_ASCII) { - return this.DecodeIntArrayASCII(intArray, numIntArray, dimIntArray, stride, ifs, predMode, bstream); - } - return this.DecodeIntArrayBinary(intArray, numIntArray, dimIntArray, stride, ifs, predMode, bstream); - }; - function ComputeNormals(triangles, ntris, coords, nvert, normals) { - var t3, v, n, t, a, b, c, d1, d2, n0; - n0 = new module.Vec3(); - d1 = new module.Vec3(); - d2 = new module.Vec3(); - n = nvert * 3; - for (v = 0; v < n; ++v) { - normals[v] = 0; - } - for (t = 0; t < ntris; ++t) { - t3 = t * 3; - a = triangles[t3] * 3; - b = triangles[t3 + 1] * 3; - c = triangles[t3 + 2] * 3; - d1.m_x = coords[b] - coords[a]; - d1.m_y = coords[b + 1] - coords[a + 1]; - d1.m_z = coords[b + 2] - coords[a + 2]; - d2.m_x = coords[c] - coords[a]; - d2.m_y = coords[c + 1] - coords[a + 1]; - d2.m_z = coords[c + 2] - coords[a + 2]; - n0.m_x = d1.m_y * d2.m_z - d1.m_z * d2.m_y; - n0.m_y = d1.m_z * d2.m_x - d1.m_x * d2.m_z; - n0.m_z = d1.m_x * d2.m_y - d1.m_y * d2.m_x; - normals[a] += n0.m_x; - normals[a + 1] += n0.m_y; - normals[a + 2] += n0.m_z; - normals[b] += n0.m_x; - normals[b + 1] += n0.m_y; - normals[b + 2] += n0.m_z; - normals[c] += n0.m_x; - normals[c + 1] += n0.m_y; - normals[c + 2] += n0.m_z; - } - } - module.SC3DMCDecoder.prototype.ProcessNormals = function (ifs) { - var v3, v2, nvert, normalSize, normals, quantFloatArray, orientation, triangles, n0, n1, v, rna0, rnb0, ni1, norm0; - nvert = ifs.GetNNormal(); - - normalSize = ifs.GetNNormal() * 3; - if (this.m_normalsSize < normalSize) { - this.m_normalsSize = normalSize; - this.m_normals = new Float32Array(this.m_normalsSize); - } - normals = this.m_normals; - quantFloatArray = this.m_quantFloatArray; - orientation = this.m_orientation; - triangles = ifs.GetCoordIndex(); - ComputeNormals(triangles, ifs.GetNCoordIndex(), quantFloatArray, nvert, normals); - n0 = new module.Vec3(); - n1 = new module.Vec3(); - for (v = 0; v < nvert; ++v) { - v3 = 3 * v; - n0.m_x = normals[v3]; - n0.m_y = normals[v3 + 1]; - n0.m_z = normals[v3 + 2]; - norm0 = Math.sqrt(n0.m_x * n0.m_x + n0.m_y * n0.m_y + n0.m_z * n0.m_z); - if (norm0 === 0.0) { - norm0 = 1.0; - } - SphereToCube(n0, n1); - rna0 = n1.m_x / norm0; - rnb0 = n1.m_y / norm0; - ni1 = n1.m_z + orientation[v]; - orientation[v] = ni1; - if ((ni1 >>> 1) !== (n1.m_z >>> 1)) { - rna0 = 0.0; - rnb0 = 0.0; - } - v2 = v * 2; - normals[v2] = rna0; - normals[v2 + 1] = rnb0; - } - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.IQuantize = function (floatArray, - numFloatArray, - dimFloatArray, - stride, - minFloatArray, - maxFloatArray, - nQBits, - predMode) { - var v, nin, nout, orientation, normals, CubeToSphere; - if (predMode.m_value === local.O3DGC_SC3DMC_SURF_NORMALS_PREDICTION) { - CubeToSphere = local.CubeToSphere; - orientation = this.m_orientation; - normals = this.m_normals; - nin = new module.Vec3(0, 0, 0); - nout = new module.Vec3(0, 0, 0); - this.IQuantizeFloatArray(floatArray, numFloatArray, dimFloatArray, stride, this.m_minNormal, this.m_maxNormal, nQBits + 1); - for (v = 0; v < numFloatArray; ++v) { - nin.m_x = floatArray[stride * v] + normals[2 * v]; - nin.m_y = floatArray[stride * v + 1] + normals[2 * v + 1]; - nin.m_z = orientation[v]; - CubeToSphere[nin.m_z](nin, nout); - floatArray[stride * v] = nout.m_x; - floatArray[stride * v + 1] = nout.m_y; - floatArray[stride * v + 2] = nout.m_z; - } - } else { - this.IQuantizeFloatArray(floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits); - } - }; - module.SC3DMCDecoder.prototype.DecodeFloatArrayBinary = function (floatArray, - numFloatArray, - dimFloatArray, - stride, - minFloatArray, - maxFloatArray, - nQBits, - ifs, - predMode, - bstream) { - var maxNPred, testPredEnabled, testParaPredEnabled, bestPred, dModel, buffer, quantFloatArray, neighbors, normals, nPred, ta, i, v, u, u_begin, u_end, iterator, orientation, streamType, predResidual, acd, bModel0, bModel1, mModelPreds, v2T, v2TNeighbors, triangles, size, start, streamSize, mask, binarization, iteratorPred, exp_k, M, mModelValues; - iterator = this.m_iterator; - orientation = this.m_orientation; - streamType = this.m_streamType; - acd = new module.ArithmeticDecoder(); - bModel0 = new module.StaticBitModel(); - bModel1 = new module.AdaptiveBitModel(); - mModelPreds = new module.AdaptiveDataModel(); - maxNPred = local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS; - mModelPreds.SetAlphabet(maxNPred + 1); - v2T = this.m_triangleListDecoder.GetVertexToTriangle(); - v2TNeighbors = v2T.m_neighbors; - triangles = ifs.GetCoordIndex(); - size = numFloatArray * dimFloatArray; - start = iterator.m_count; - streamSize = bstream.ReadUInt32(iterator, streamType); - mask = bstream.ReadUChar(iterator, streamType); - binarization = (mask >>> 4) & 7; - predMode.m_value = mask & 7; - streamSize -= (iterator.m_count - start); - iteratorPred = new module.Iterator(); - iteratorPred.m_count = iterator.m_count + streamSize; - exp_k = 0; - M = 0; - if (binarization !== local.O3DGC_SC3DMC_BINARIZATION_AC_EGC) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - buffer = bstream.GetBuffer(iterator, streamSize); - iterator.m_count += streamSize; - acd.SetBuffer(streamSize, buffer); - acd.StartDecoder(); - exp_k = acd.ExpGolombDecode(0, bModel0, bModel1); - M = acd.ExpGolombDecode(0, bModel0, bModel1); - mModelValues = new module.AdaptiveDataModel(); - mModelValues.SetAlphabet(M + 2); - if (predMode.m_value === local.O3DGC_SC3DMC_SURF_NORMALS_PREDICTION) { - if (this.m_orientationSize < size) { - this.m_orientationSize = size; - this.m_orientation = new Int8Array(this.m_orientationSize); - orientation = this.m_orientation; - } - dModel = new module.AdaptiveDataModel(); - dModel.SetAlphabet(12); - for (i = 0; i < numFloatArray; ++i) { - orientation[i] = UIntToInt(acd.DecodeAdaptiveDataModel(dModel)); - } - this.ProcessNormals(ifs); - dimFloatArray = 2; - } - if (this.m_quantFloatArraySize < size) { - this.m_quantFloatArraySize = size; - this.m_quantFloatArray = new Int32Array(this.m_quantFloatArraySize); - } - quantFloatArray = this.m_quantFloatArray; - neighbors = this.m_neighbors; - normals = this.m_normals; - nPred = new module.NumberRef(); - testPredEnabled = predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION; - testParaPredEnabled = predMode.m_value === local.O3DGC_SC3DMC_PARALLELOGRAM_PREDICTION; - for (v = 0; v < numFloatArray; ++v) { - nPred.m_value = 0; - if (v2T.GetNumNeighbors(v) > 0 && testPredEnabled) { - u_begin = v2T.Begin(v); - u_end = v2T.End(v); - if (testParaPredEnabled) { - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - ParallelogramPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride, v2T, v2TNeighbors); - } - } - if (nPred.m_value < maxNPred) { - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - DeltaPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride); - } - } - } - if (nPred.m_value > 1) { - bestPred = acd.DecodeAdaptiveDataModel(mModelPreds); - for (i = 0; i < dimFloatArray; ++i) { - predResidual = acd.DecodeIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - quantFloatArray[v * stride + i] = predResidual + neighbors[bestPred].m_pred[i]; - } - } else if (v > 0 && testPredEnabled) { - for (i = 0; i < dimFloatArray; ++i) { - predResidual = acd.DecodeIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - quantFloatArray[v * stride + i] = predResidual + quantFloatArray[(v - 1) * stride + i]; - } - } else { - for (i = 0; i < dimFloatArray; ++i) { - predResidual = acd.DecodeUIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - quantFloatArray[v * stride + i] = predResidual; - } - } - } - iterator.m_count = iteratorPred.m_count; - this.IQuantize(floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits, predMode); - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.DecodeFloatArrayASCII = function (floatArray, - numFloatArray, - dimFloatArray, - stride, - minFloatArray, - maxFloatArray, - nQBits, - ifs, - predMode, - bstream) { - var maxNPred, testPredEnabled, testParaPredEnabled, iterator, orientation, streamType, predResidual, v2T, v2TNeighbors, triangles, size, start, streamSize, mask, binarization, iteratorPred, quantFloatArray, neighbors, normals, nPred, v, u, u_begin, u_end, ta, i, bestPred; - maxNPred = local.O3DGC_SC3DMC_MAX_PREDICTION_NEIGHBORS; - iterator = this.m_iterator; - orientation = this.m_orientation; - streamType = this.m_streamType; - v2T = this.m_triangleListDecoder.GetVertexToTriangle(); - v2TNeighbors = v2T.m_neighbors; - triangles = ifs.GetCoordIndex(); - size = numFloatArray * dimFloatArray; - start = iterator.m_count; - streamSize = bstream.ReadUInt32(iterator, streamType); - mask = bstream.ReadUChar(iterator, streamType); - binarization = (mask >>> 4) & 7; - predMode.m_value = mask & 7; - streamSize -= (iterator.m_count - start); - iteratorPred = new module.Iterator(); - iteratorPred.m_count = iterator.m_count + streamSize; - if (binarization !== local.O3DGC_SC3DMC_BINARIZATION_ASCII) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - bstream.ReadUInt32(iteratorPred, streamType); - if (predMode.m_value === local.O3DGC_SC3DMC_SURF_NORMALS_PREDICTION) { - if (this.m_orientationSize < numFloatArray) { - this.m_orientationSize = numFloatArray; - this.m_orientation = new Int8Array(this.m_orientationSize); - orientation = this.m_orientation; - } - for (i = 0; i < numFloatArray; ++i) { - orientation[i] = bstream.ReadIntASCII(iterator); - } - this.ProcessNormals(ifs); - dimFloatArray = 2; - } - if (this.m_quantFloatArraySize < size) { - this.m_quantFloatArraySize = size; - this.m_quantFloatArray = new Int32Array(this.m_quantFloatArraySize); - } - quantFloatArray = this.m_quantFloatArray; - neighbors = this.m_neighbors; - normals = this.m_normals; - nPred = new module.NumberRef(); - testPredEnabled = predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION; - testParaPredEnabled = predMode.m_value === local.O3DGC_SC3DMC_PARALLELOGRAM_PREDICTION; - for (v = 0; v < numFloatArray; ++v) { - nPred.m_value = 0; - if (v2T.GetNumNeighbors(v) > 0 && testPredEnabled) { - u_begin = v2T.Begin(v); - u_end = v2T.End(v); - if (testParaPredEnabled) { - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - ParallelogramPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride, v2T, v2TNeighbors); - } - } - if (nPred.m_value < maxNPred) { - for (u = u_begin; u < u_end; ++u) { - ta = v2TNeighbors[u]; - if (ta < 0) { - break; - } - DeltaPredictors(triangles, ta, v, nPred, neighbors, dimFloatArray, quantFloatArray, stride); - } - } - } - if (nPred.m_value > 1) { - bestPred = bstream.ReadUCharASCII(iteratorPred); - for (i = 0; i < dimFloatArray; ++i) { - predResidual = bstream.ReadIntASCII(iterator); - quantFloatArray[v * stride + i] = predResidual + neighbors[bestPred].m_pred[i]; - } - } else if (v > 0 && predMode.m_value !== local.O3DGC_SC3DMC_NO_PREDICTION) { - for (i = 0; i < dimFloatArray; ++i) { - predResidual = bstream.ReadIntASCII(iterator); - quantFloatArray[v * stride + i] = predResidual + quantFloatArray[(v - 1) * stride + i]; - } - } else { - for (i = 0; i < dimFloatArray; ++i) { - predResidual = bstream.ReadUIntASCII(iterator); - quantFloatArray[v * stride + i] = predResidual; - } - } - } - iterator.m_count = iteratorPred.m_count; - this.IQuantize(floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits, predMode); - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.DecodeFloatArray = function (floatArray, - numFloatArray, - dimFloatArray, - stride, - minFloatArray, - maxFloatArray, - nQBits, - ifs, - predMode, - bstream) { - if (this.m_streamType === local.O3DGC_STREAM_TYPE_ASCII) { - return this.DecodeFloatArrayASCII(floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits, ifs, predMode, bstream); - } - return this.DecodeFloatArrayBinary(floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits, ifs, predMode, bstream); - }; - module.SC3DMCDecoder.prototype.IQuantizeFloatArray = function (floatArray, numFloatArray, dimFloatArray, stride, minFloatArray, maxFloatArray, nQBits) { - var idelta, quantFloatArray, d, r, v; - idelta = this.m_idelta; - quantFloatArray = this.m_quantFloatArray; - for (d = 0; d < dimFloatArray; ++d) { - r = maxFloatArray[d] - minFloatArray[d]; - if (r > 0.0) { - idelta[d] = r / (((1 << nQBits) >>> 0) - 1); - } else { - idelta[d] = 1.0; - } - } - for (v = 0; v < numFloatArray; ++v) { - for (d = 0; d < dimFloatArray; ++d) { - floatArray[v * stride + d] = quantFloatArray[v * stride + d] * idelta[d] + minFloatArray[d]; - } - } - return module.O3DGC_OK; - }; - module.SC3DMCDecoder.prototype.DecodePlayload = function (ifs, bstream) { - var params, iterator, stats, predMode, timer, ret, a; - params = this.m_params; - iterator = this.m_iterator; - stats = this.m_stats; - predMode = new module.NumberRef(); - timer = new module.Timer(); - ret = module.O3DGC_OK; - this.m_triangleListDecoder.SetStreamType(this.m_streamType); - stats.m_streamSizeCoordIndex = iterator.m_count; - timer.Tic(); - this.m_triangleListDecoder.Decode(ifs.GetCoordIndex(), ifs.GetNCoordIndex(), ifs.GetNCoord(), bstream, iterator); - timer.Toc(); - stats.m_timeCoordIndex = timer.GetElapsedTime(); - stats.m_streamSizeCoordIndex = iterator.m_count - stats.m_streamSizeCoordIndex; - // decode coord - stats.m_streamSizeCoord = iterator.m_count; - timer.Tic(); - if (ifs.GetNCoord() > 0) { - ret = this.DecodeFloatArray(ifs.GetCoord(), ifs.GetNCoord(), 3, 3, ifs.GetCoordMinArray(), ifs.GetCoordMaxArray(), params.GetCoordQuantBits(), ifs, predMode, bstream); - params.SetCoordPredMode(predMode.m_value); - } - if (ret !== module.O3DGC_OK) { - return ret; - } - timer.Toc(); - stats.m_timeCoord = timer.GetElapsedTime(); - stats.m_streamSizeCoord = iterator.m_count - stats.m_streamSizeCoord; - - // decode Normal - stats.m_streamSizeNormal = iterator.m_count; - timer.Tic(); - if (ifs.GetNNormal() > 0) { - ret = this.DecodeFloatArray(ifs.GetNormal(), ifs.GetNNormal(), 3, 3, ifs.GetNormalMinArray(), ifs.GetNormalMaxArray(), params.GetNormalQuantBits(), ifs, predMode, bstream); - params.SetNormalPredMode(predMode.m_value); - } - if (ret !== module.O3DGC_OK) { - return ret; - } - timer.Toc(); - stats.m_timeNormal = timer.GetElapsedTime(); - stats.m_streamSizeNormal = iterator.m_count - stats.m_streamSizeNormal; - - // decode FloatAttributes - for (a = 0; a < ifs.GetNumFloatAttributes(); ++a) { - stats.m_streamSizeFloatAttribute[a] = iterator.m_count; - timer.Tic(); - ret = this.DecodeFloatArray(ifs.GetFloatAttribute(a), ifs.GetNFloatAttribute(a), ifs.GetFloatAttributeDim(a), ifs.GetFloatAttributeDim(a), ifs.GetFloatAttributeMinArray(a), ifs.GetFloatAttributeMaxArray(a), params.GetFloatAttributeQuantBits(a), ifs, predMode, bstream); - params.SetFloatAttributePredMode(a, predMode.m_value); - timer.Toc(); - stats.m_timeFloatAttribute[a] = timer.GetElapsedTime(); - stats.m_streamSizeFloatAttribute[a] = iterator.m_count - stats.m_streamSizeFloatAttribute[a]; - } - if (ret !== module.O3DGC_OK) { - return ret; - } - // decode IntAttributes - for (a = 0; a < ifs.GetNumIntAttributes(); ++a) { - stats.m_streamSizeIntAttribute[a] = iterator.m_count; - timer.Tic(); - ret = this.DecodeIntArray(ifs.GetIntAttribute(a), ifs.GetNIntAttribute(a), ifs.GetIntAttributeDim(a), ifs.GetIntAttributeDim(a), ifs, predMode, bstream); - params.SetIntAttributePredMode(a, predMode.m_value); - timer.Toc(); - stats.m_timeIntAttribute[a] = timer.GetElapsedTime(); - stats.m_streamSizeIntAttribute[a] = iterator.m_count - stats.m_streamSizeIntAttribute[a]; - } - if (ret !== module.O3DGC_OK) { - return ret; - } - timer.Tic(); - this.m_triangleListDecoder.Reorder(); - timer.Toc(); - stats.m_timeReorder = timer.GetElapsedTime(); - return ret; - }; - // DVEncodeParams class - module.DVEncodeParams = function () { - this.m_encodeMode = local.O3DGC_DYNAMIC_VECTOR_ENCODE_MODE_LIFT; - this.m_streamTypeMode = local.O3DGC_STREAM_TYPE_ASCII; - this.m_quantBits = 10; - }; - module.DVEncodeParams.prototype.GetStreamType = function () { - return this.m_streamTypeMode; - }; - module.DVEncodeParams.prototype.GetEncodeMode = function () { - return this.m_encodeMode; - }; - module.DVEncodeParams.prototype.GetQuantBits = function () { - return this.m_quantBits; - }; - module.DVEncodeParams.prototype.SetStreamType = function (streamTypeMode) { - this.m_streamTypeMode = streamTypeMode; - }; - module.DVEncodeParams.prototype.SetEncodeMode = function (encodeMode) { - this.m_encodeMode = encodeMode; - }; - module.DVEncodeParams.prototype.SetQuantBits = function (quantBits) { - this.m_quantBits = quantBits; - }; - // DynamicVector class - module.DynamicVector = function () { - this.m_num = 0; - this.m_dim = 0; - this.m_stride = 0; - this.m_max = {}; - this.m_min = {}; - this.m_vectors = {}; - }; - module.DynamicVector.prototype.GetNVector = function () { - return this.m_num; - }; - module.DynamicVector.prototype.GetDimVector = function () { - return this.m_dim; - }; - module.DynamicVector.prototype.GetStride = function () { - return this.m_stride; - }; - module.DynamicVector.prototype.GetMinArray = function () { - return this.m_min; - }; - module.DynamicVector.prototype.GetMaxArray = function () { - return this.m_max; - }; - module.DynamicVector.prototype.GetVectors = function () { - return this.m_vectors; - }; - module.DynamicVector.prototype.GetMin = function (j) { - return this.m_min[j]; - }; - module.DynamicVector.prototype.GetMax = function (j) { - return this.m_max[j]; - }; - module.DynamicVector.prototype.SetNVector = function (num) { - this.m_num = num; - }; - module.DynamicVector.prototype.SetDimVector = function (dim) { - this.m_dim = dim; - }; - module.DynamicVector.prototype.SetStride = function (stride) { - this.m_stride = stride; - }; - module.DynamicVector.prototype.SetMinArray = function (min) { - this.m_min = min; - }; - module.DynamicVector.prototype.SetMaxArray = function (max) { - this.m_max = max; - }; - module.DynamicVector.prototype.SetMin = function (j, min) { - this.m_min[j] = min; - }; - module.DynamicVector.prototype.SetMax = function (j, max) { - this.m_max[j] = max; - }; - module.DynamicVector.prototype.SetVectors = function (vectors) { - this.m_vectors = vectors; - }; - // DynamicVectorDecoder class - module.DynamicVectorDecoder = function () { - this.m_streamSize = 0; - this.m_maxNumVectors = 0; - this.m_numVectors = 0; - this.m_dimVectors = 0; - this.m_quantVectors = {}; - this.m_iterator = new module.Iterator(); - this.m_streamType = local.O3DGC_STREAM_TYPE_UNKOWN; - this.m_params = new module.DVEncodeParams(); - }; - module.DynamicVectorDecoder.prototype.GetStreamType = function () { - return this.m_streamType; - }; - module.DynamicVectorDecoder.prototype.GetIterator = function () { - return this.m_iterator; - }; - module.DynamicVectorDecoder.prototype.SetStreamType = function (streamType) { - this.m_streamType = streamType; - }; - module.DynamicVectorDecoder.prototype.SetIterator = function (iterator) { - this.m_iterator = iterator; - }; - module.DynamicVectorDecoder.prototype.IUpdate = function (data, shift, size) { - var p, size1; - size1 = size - 1; - p = 2; - data[shift] -= data[shift + 1] >> 1; - while (p < size1) { - data[shift + p] -= (data[shift + p - 1] + data[shift + p + 1] + 2) >> 2; - p += 2; - } - if (p === size1) { - data[shift + p] -= data[shift + p - 1] >> 1; - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.IPredict = function (data, shift, size) { - var p, size1; - size1 = size - 1; - p = 1; - while (p < size1) { - data[shift + p] += (data[shift + p - 1] + data[shift + p + 1] + 1) >> 1; - p += 2; - } - if (p === size1) { - data[shift + p] += data[shift + p - 1]; - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.Merge = function (data, shift, size) { - var i, h, a, b, tmp; - h = (size >> 1) + (size & 1); - a = h - 1; - b = h; - while (a > 0) { - for (i = a; i < b; i += 2) { - tmp = data[shift + i]; - data[shift + i] = data[shift + i + 1]; - data[shift + i + 1] = tmp; - } - --a; - ++b; - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.ITransform = function (data, shift, size) { - var n, even, k, i; - n = size; - even = 0; - k = 0; - even += ((n & 1) << k++) >>> 0; - while (n > 1) { - n = (n >> 1) + ((n & 1) >>> 0); - even += ((n & 1) << k++) >>> 0; - } - for (i = k - 2; i >= 0; --i) { - n = ((n << 1) >>> 0) - (((even >>> i) & 1)) >>> 0; - this.Merge(data, shift, n); - this.IUpdate(data, shift, n); - this.IPredict(data, shift, n); - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.IQuantize = function (floatArray, - numFloatArray, - dimFloatArray, - stride, - minFloatArray, - maxFloatArray, - nQBits) { - var quantVectors, r, idelta, size, d, v; - quantVectors = this.m_quantVectors; - size = numFloatArray * dimFloatArray; - for (d = 0; d < dimFloatArray; ++d) { - r = maxFloatArray[d] - minFloatArray[d]; - if (r > 0.0) { - idelta = r / (((1 << nQBits) >>> 0) - 1); - } else { - idelta = 1.0; - } - for (v = 0; v < numFloatArray; ++v) { - floatArray[v * stride + d] = quantVectors[v + d * numFloatArray] * idelta + minFloatArray[d]; - } - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.DecodeHeader = function (dynamicVector, bstream) { - var iterator, c0, start_code, streamType; - iterator = this.m_iterator; - c0 = iterator.m_count; - start_code = bstream.ReadUInt32(iterator, local.O3DGC_STREAM_TYPE_BINARY); - if (start_code !== local.O3DGC_DV_START_CODE) { - iterator.m_count = c0; - start_code = bstream.ReadUInt32(iterator, local.O3DGC_STREAM_TYPE_ASCII); - if (start_code !== local.O3DGC_DV_START_CODE) { - return module.O3DGC_ERROR_CORRUPTED_STREAM; - } - this.m_streamType = local.O3DGC_STREAM_TYPE_ASCII; - } else { - this.m_streamType = local.O3DGC_STREAM_TYPE_BINARY; - } - streamType = this.m_streamType; - this.m_streamSize = bstream.ReadUInt32(iterator, streamType); - this.m_params.SetEncodeMode(bstream.ReadUChar(iterator, streamType)); - dynamicVector.SetNVector(bstream.ReadUInt32(iterator, streamType)); - if (dynamicVector.GetNVector() > 0) { - dynamicVector.SetDimVector(bstream.ReadUInt32(iterator, streamType)); - this.m_params.SetQuantBits(bstream.ReadUChar(iterator, streamType)); - } - return module.O3DGC_OK; - }; - module.DynamicVectorDecoder.prototype.DecodePlayload = function (dynamicVector, bstream) { - var size, iterator, streamType, ret, start, streamSize, dim, num, j, acd, bModel0, bModel1, exp_k, M, buffer, mModelValues, quantVectors, v, d; - iterator = this.m_iterator; - streamType = this.m_streamType; - ret = module.O3DGC_OK; - start = iterator.m_count; - streamSize = bstream.ReadUInt32(iterator, streamType); - dim = dynamicVector.GetDimVector(); - num = dynamicVector.GetNVector(); - size = dim * num; - for (j = 0; j < dynamicVector.GetDimVector(); ++j) { - dynamicVector.SetMin(j, bstream.ReadFloat32(iterator, streamType)); - dynamicVector.SetMax(j, bstream.ReadFloat32(iterator, streamType)); - } - acd = new module.ArithmeticDecoder(); - bModel0 = new module.StaticBitModel(); - bModel1 = new module.AdaptiveBitModel(); - streamSize -= (iterator.m_count - start); - exp_k = 0; - M = 0; - if (streamType === local.O3DGC_STREAM_TYPE_BINARY) { - buffer = bstream.GetBuffer(iterator, streamSize); - iterator.m_count += streamSize; - acd.SetBuffer(streamSize, buffer); - acd.StartDecoder(); - exp_k = acd.ExpGolombDecode(0, bModel0, bModel1); - M = acd.ExpGolombDecode(0, bModel0, bModel1); - } - mModelValues = new module.AdaptiveDataModel(); - mModelValues.SetAlphabet(M + 2); - if (this.m_maxNumVectors < size) { - this.m_maxNumVectors = size; - this.m_quantVectors = new Int32Array(this.m_maxNumVectors); - } - quantVectors = this.m_quantVectors; - if (streamType === local.O3DGC_STREAM_TYPE_ASCII) { - for (v = 0; v < num; ++v) { - for (d = 0; d < dim; ++d) { - quantVectors[d * num + v] = bstream.ReadIntASCII(iterator); - } - } - } else { - for (v = 0; v < num; ++v) { - for (d = 0; d < dim; ++d) { - quantVectors[d * num + v] = acd.DecodeIntACEGC(mModelValues, bModel0, bModel1, exp_k, M); - } - } - } - for (d = 0; d < dim; ++d) { - this.ITransform(quantVectors, d * num, num); - } - this.IQuantize(dynamicVector.GetVectors(), num, dim, - dynamicVector.GetStride(), dynamicVector.GetMinArray(), - dynamicVector.GetMaxArray(), this.m_params.GetQuantBits()); - return ret; - }; - - return module; -})(); - diff --git a/spaces/bastiendechamps/geoguessr-bot/app.py b/spaces/bastiendechamps/geoguessr-bot/app.py deleted file mode 100644 index 1ace5821cd2014468680058a895455a9c45c0014..0000000000000000000000000000000000000000 --- a/spaces/bastiendechamps/geoguessr-bot/app.py +++ /dev/null @@ -1,92 +0,0 @@ -import os.path - -import numpy as np -import gradio as gr -import plotly.graph_objects as go - -from geoguessr_bot.guessr import RandomGuessr, AbstractGuessr, NearestNeighborEmbedderGuessr, \ - AverageNeighborsEmbedderGuessr -from geoguessr_bot.retriever import DinoV2Embedder, Retriever, RandomEmbedder - -ALL_GUESSR_CLASS = { - "random": RandomGuessr, - "nearestNeighborEmbedder": NearestNeighborEmbedderGuessr, - "averageNeighborsEmbedder": AverageNeighborsEmbedderGuessr, -} - -ALL_GUESSR_ARGS = { - "random": {}, - "nearestNeighborEmbedder": { - "embedder": DinoV2Embedder( - device="cpu" - ), - # "embedder": RandomEmbedder(n_dim=384), - "retriever": Retriever( - embeddings_path=os.path.join(os.path.dirname(os.path.abspath(__file__)), - "resources/embeddings.npy"), - ), - "metadata_path": os.path.join(os.path.dirname(os.path.abspath(__file__)), - "resources/metadatav3.csv"), - }, - "averageNeighborsEmbedder": { - "embedder": DinoV2Embedder( - device="cpu" - ), - # "embedder": RandomEmbedder(n_dim=384), - "retriever": Retriever( - embeddings_path=os.path.join(os.path.dirname(os.path.abspath(__file__)), - "resources/embeddings.npy"), - ), - "metadata_path": os.path.join(os.path.dirname(os.path.abspath(__file__)), - "resources/metadatav3.csv"), - "n_neighbors": 100, - "dbscan_eps": 0.5 - } -} - -# For instantiating guessrs only when needed -ALL_GUESSR = {} - - -def create_map(guessr: str) -> go.Figure: - """Create an interactive map - """ - # Instantiate guessr if not already done - if guessr not in ALL_GUESSR: - ALL_GUESSR[guessr] = ALL_GUESSR_CLASS[guessr](**ALL_GUESSR_ARGS[guessr]) - return AbstractGuessr.create_map() - - -def guess(guessr: str, uploaded_image) -> go.Figure: - """Guess a coordinate from an image uploaded in the Gradio interface - """ - # Instantiate guessr if not already done - if guessr not in ALL_GUESSR: - ALL_GUESSR[guessr] = ALL_GUESSR_CLASS[guessr](**ALL_GUESSR_ARGS[guessr]) - # Convert image to numpy array - uploaded_image = np.array(uploaded_image) - # Guess coordinate - guess_coordinate = ALL_GUESSR[guessr].guess(uploaded_image) - # Create map - fig = ALL_GUESSR[guessr].create_map(guess_coordinate) - return fig - - -if __name__ == "__main__": - # Create & launch Gradio interface - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - guessr_dropdown = gr.Dropdown( - list(ALL_GUESSR_CLASS.keys()), - value="nearestNeighborEmbedder", - label="Guessr type", - info="More Guessr types will be added soon!" - ) - image = gr.Image(shape=(800, 800)) - button = gr.Button(text="Guess") - interactive_map = gr.Plot() - demo.load(create_map, [guessr_dropdown], interactive_map) - button.click(guess, [guessr_dropdown, image], interactive_map) - # Launch demo 🚀 - demo.launch() diff --git a/spaces/belinghy/character-animation-motion-vaes/static/three/examples/jsm/controls/OrbitControls.min.js b/spaces/belinghy/character-animation-motion-vaes/static/three/examples/jsm/controls/OrbitControls.min.js deleted file mode 100644 index cb0dd5d2fe597146bbf5f14f03bbf78bd0311f90..0000000000000000000000000000000000000000 --- a/spaces/belinghy/character-animation-motion-vaes/static/three/examples/jsm/controls/OrbitControls.min.js +++ /dev/null @@ -1 +0,0 @@ -import{EventDispatcher,MOUSE,Quaternion,Spherical,TOUCH,Vector2,Vector3}from"../../../build/three.module.min.js";var OrbitControls=function(e,t){var o,n,a,i,r,c;void 0===t&&console.warn('THREE.OrbitControls: The second parameter "domElement" is now mandatory.'),t===document&&console.error('THREE.OrbitControls: "document" should not be used as the target "domElement". Please use "renderer.domElement" instead.'),this.object=e,this.domElement=t,this.enabled=!0,this.target=new Vector3,this.minDistance=0,this.maxDistance=1/0,this.minZoom=0,this.maxZoom=1/0,this.minPolarAngle=0,this.maxPolarAngle=Math.PI,this.minAzimuthAngle=-1/0,this.maxAzimuthAngle=1/0,this.enableDamping=!1,this.dampingFactor=.05,this.enableZoom=!0,this.zoomSpeed=1,this.enableRotate=!0,this.rotateSpeed=1,this.enablePan=!0,this.panSpeed=1,this.screenSpacePanning=!0,this.keyPanSpeed=7,this.autoRotate=!1,this.autoRotateSpeed=2,this.enableKeys=!0,this.keys={LEFT:37,UP:38,RIGHT:39,BOTTOM:40},this.mouseButtons={LEFT:MOUSE.ROTATE,MIDDLE:MOUSE.DOLLY,RIGHT:MOUSE.PAN},this.touches={ONE:TOUCH.ROTATE,TWO:TOUCH.DOLLY_PAN},this.target0=this.target.clone(),this.position0=this.object.position.clone(),this.zoom0=this.object.zoom,this.getPolarAngle=function(){return b.phi},this.getAzimuthalAngle=function(){return b.theta},this.saveState=function(){s.target0.copy(s.target),s.position0.copy(s.object.position),s.zoom0=s.object.zoom},this.reset=function(){s.target.copy(s.target0),s.object.position.copy(s.position0),s.object.zoom=s.zoom0,s.object.updateProjectionMatrix(),s.dispatchEvent(u),s.update(),h=p.NONE},this.update=(o=new Vector3,n=(new Quaternion).setFromUnitVectors(e.up,new Vector3(0,1,0)),a=n.clone().inverse(),i=new Vector3,r=new Quaternion,c=2*Math.PI,function(){var e=s.object.position;o.copy(e).sub(s.target),o.applyQuaternion(n),b.setFromVector3(o),s.autoRotate&&h===p.NONE&&C(2*Math.PI/60/60*s.autoRotateSpeed),s.enableDamping?(b.theta+=E.theta*s.dampingFactor,b.phi+=E.phi*s.dampingFactor):(b.theta+=E.theta,b.phi+=E.phi);var t=s.minAzimuthAngle,l=s.maxAzimuthAngle;return isFinite(t)&&isFinite(l)&&(t<-Math.PI?t+=c:t>Math.PI&&(t-=c),l<-Math.PI?l+=c:l>Math.PI&&(l-=c),b.theta=t(t+l)/2?Math.max(t,b.theta):Math.min(l,b.theta)),b.phi=Math.max(s.minPolarAngle,Math.min(s.maxPolarAngle,b.phi)),b.makeSafe(),b.radius*=O,b.radius=Math.max(s.minDistance,Math.min(s.maxDistance,b.radius)),!0===s.enableDamping?s.target.addScaledVector(f,s.dampingFactor):s.target.add(f),o.setFromSpherical(b),o.applyQuaternion(a),e.copy(s.target).add(o),s.object.lookAt(s.target),!0===s.enableDamping?(E.theta*=1-s.dampingFactor,E.phi*=1-s.dampingFactor,f.multiplyScalar(1-s.dampingFactor)):(E.set(0,0,0),f.set(0,0,0)),O=1,!!(g||i.distanceToSquared(s.object.position)>d||8*(1-r.dot(s.object.quaternion))>d)&&(s.dispatchEvent(u),i.copy(s.object.position),r.copy(s.object.quaternion),g=!1,!0)}),this.dispose=function(){s.domElement.removeEventListener("contextmenu",ee,!1),s.domElement.removeEventListener("pointerdown",B,!1),s.domElement.removeEventListener("wheel",W,!1),s.domElement.removeEventListener("touchstart",Q,!1),s.domElement.removeEventListener("touchend",$,!1),s.domElement.removeEventListener("touchmove",J,!1),s.domElement.ownerDocument.removeEventListener("pointermove",G,!1),s.domElement.ownerDocument.removeEventListener("pointerup",K,!1),s.domElement.removeEventListener("keydown",q,!1)};var s=this,u={type:"change"},l={type:"start"},m={type:"end"},p={NONE:-1,ROTATE:0,DOLLY:1,PAN:2,TOUCH_ROTATE:3,TOUCH_PAN:4,TOUCH_DOLLY_PAN:5,TOUCH_DOLLY_ROTATE:6},h=p.NONE,d=1e-6,b=new Spherical,E=new Spherical,O=1,f=new Vector3,g=!1,v=new Vector2,T=new Vector2,y=new Vector2,P=new Vector2,L=new Vector2,w=new Vector2,A=new Vector2,N=new Vector2,M=new Vector2;function j(){return Math.pow(.95,s.zoomSpeed)}function C(e){E.theta-=e}function D(e){E.phi-=e}var S,R=(S=new Vector3,function(e,t){S.setFromMatrixColumn(t,0),S.multiplyScalar(-e),f.add(S)}),k=function(){var e=new Vector3;return function(t,o){!0===s.screenSpacePanning?e.setFromMatrixColumn(o,1):(e.setFromMatrixColumn(o,0),e.crossVectors(s.object.up,e)),e.multiplyScalar(t),f.add(e)}}(),Y=function(){var e=new Vector3;return function(t,o){var n=s.domElement;if(s.object.isPerspectiveCamera){var a=s.object.position;e.copy(a).sub(s.target);var i=e.length();i*=Math.tan(s.object.fov/2*Math.PI/180),R(2*t*i/n.clientHeight,s.object.matrix),k(2*o*i/n.clientHeight,s.object.matrix)}else s.object.isOrthographicCamera?(R(t*(s.object.right-s.object.left)/s.object.zoom/n.clientWidth,s.object.matrix),k(o*(s.object.top-s.object.bottom)/s.object.zoom/n.clientHeight,s.object.matrix)):(console.warn("WARNING: OrbitControls.js encountered an unknown camera type - pan disabled."),s.enablePan=!1)}}();function H(e){s.object.isPerspectiveCamera?O/=e:s.object.isOrthographicCamera?(s.object.zoom=Math.max(s.minZoom,Math.min(s.maxZoom,s.object.zoom*e)),s.object.updateProjectionMatrix(),g=!0):(console.warn("WARNING: OrbitControls.js encountered an unknown camera type - dolly/zoom disabled."),s.enableZoom=!1)}function x(e){s.object.isPerspectiveCamera?O*=e:s.object.isOrthographicCamera?(s.object.zoom=Math.max(s.minZoom,Math.min(s.maxZoom,s.object.zoom/e)),s.object.updateProjectionMatrix(),g=!0):(console.warn("WARNING: OrbitControls.js encountered an unknown camera type - dolly/zoom disabled."),s.enableZoom=!1)}function U(e){v.set(e.clientX,e.clientY)}function V(e){P.set(e.clientX,e.clientY)}function I(e){if(1==e.touches.length)v.set(e.touches[0].pageX,e.touches[0].pageY);else{var t=.5*(e.touches[0].pageX+e.touches[1].pageX),o=.5*(e.touches[0].pageY+e.touches[1].pageY);v.set(t,o)}}function z(e){if(1==e.touches.length)P.set(e.touches[0].pageX,e.touches[0].pageY);else{var t=.5*(e.touches[0].pageX+e.touches[1].pageX),o=.5*(e.touches[0].pageY+e.touches[1].pageY);P.set(t,o)}}function X(e){var t=e.touches[0].pageX-e.touches[1].pageX,o=e.touches[0].pageY-e.touches[1].pageY,n=Math.sqrt(t*t+o*o);A.set(0,n)}function _(e){if(1==e.touches.length)T.set(e.touches[0].pageX,e.touches[0].pageY);else{var t=.5*(e.touches[0].pageX+e.touches[1].pageX),o=.5*(e.touches[0].pageY+e.touches[1].pageY);T.set(t,o)}y.subVectors(T,v).multiplyScalar(s.rotateSpeed);var n=s.domElement;C(2*Math.PI*y.x/n.clientHeight),D(2*Math.PI*y.y/n.clientHeight),v.copy(T)}function F(e){if(1==e.touches.length)L.set(e.touches[0].pageX,e.touches[0].pageY);else{var t=.5*(e.touches[0].pageX+e.touches[1].pageX),o=.5*(e.touches[0].pageY+e.touches[1].pageY);L.set(t,o)}w.subVectors(L,P).multiplyScalar(s.panSpeed),Y(w.x,w.y),P.copy(L)}function Z(e){var t=e.touches[0].pageX-e.touches[1].pageX,o=e.touches[0].pageY-e.touches[1].pageY,n=Math.sqrt(t*t+o*o);N.set(0,n),M.set(0,Math.pow(N.y/A.y,s.zoomSpeed)),H(M.y),A.copy(N)}function B(e){if(!1!==s.enabled)switch(e.pointerType){case"mouse":case"pen":!function(e){var t;switch(e.preventDefault(),s.domElement.focus?s.domElement.focus():window.focus(),e.button){case 0:t=s.mouseButtons.LEFT;break;case 1:t=s.mouseButtons.MIDDLE;break;case 2:t=s.mouseButtons.RIGHT;break;default:t=-1}switch(t){case MOUSE.DOLLY:if(!1===s.enableZoom)return;!function(e){A.set(e.clientX,e.clientY)}(e),h=p.DOLLY;break;case MOUSE.ROTATE:if(e.ctrlKey||e.metaKey||e.shiftKey){if(!1===s.enablePan)return;V(e),h=p.PAN}else{if(!1===s.enableRotate)return;U(e),h=p.ROTATE}break;case MOUSE.PAN:if(e.ctrlKey||e.metaKey||e.shiftKey){if(!1===s.enableRotate)return;U(e),h=p.ROTATE}else{if(!1===s.enablePan)return;V(e),h=p.PAN}break;default:h=p.NONE}h!==p.NONE&&(s.domElement.ownerDocument.addEventListener("pointermove",G,!1),s.domElement.ownerDocument.addEventListener("pointerup",K,!1),s.dispatchEvent(l))}(e)}}function G(e){if(!1!==s.enabled)switch(e.pointerType){case"mouse":case"pen":!function(e){if(!1===s.enabled)return;switch(e.preventDefault(),h){case p.ROTATE:if(!1===s.enableRotate)return;!function(e){T.set(e.clientX,e.clientY),y.subVectors(T,v).multiplyScalar(s.rotateSpeed);var t=s.domElement;C(2*Math.PI*y.x/t.clientHeight),D(2*Math.PI*y.y/t.clientHeight),v.copy(T),s.update()}(e);break;case p.DOLLY:if(!1===s.enableZoom)return;!function(e){N.set(e.clientX,e.clientY),M.subVectors(N,A),M.y>0?H(j()):M.y<0&&x(j()),A.copy(N),s.update()}(e);break;case p.PAN:if(!1===s.enablePan)return;!function(e){L.set(e.clientX,e.clientY),w.subVectors(L,P).multiplyScalar(s.panSpeed),Y(w.x,w.y),P.copy(L),s.update()}(e)}}(e)}}function K(e){if(!1!==s.enabled)switch(e.pointerType){case"mouse":case"pen":!function(e){if(!1===s.enabled)return;s.domElement.ownerDocument.removeEventListener("pointermove",G,!1),s.domElement.ownerDocument.removeEventListener("pointerup",K,!1),s.dispatchEvent(m),h=p.NONE}()}}function W(e){!1===s.enabled||!1===s.enableZoom||h!==p.NONE&&h!==p.ROTATE||(e.preventDefault(),e.stopPropagation(),s.dispatchEvent(l),function(e){e.deltaY<0?x(j()):e.deltaY>0&&H(j()),s.update()}(e),s.dispatchEvent(m))}function q(e){!1!==s.enabled&&!1!==s.enableKeys&&!1!==s.enablePan&&function(e){var t=!1;switch(e.keyCode){case s.keys.UP:Y(0,s.keyPanSpeed),t=!0;break;case s.keys.BOTTOM:Y(0,-s.keyPanSpeed),t=!0;break;case s.keys.LEFT:Y(s.keyPanSpeed,0),t=!0;break;case s.keys.RIGHT:Y(-s.keyPanSpeed,0),t=!0}t&&(e.preventDefault(),s.update())}(e)}function Q(e){if(!1!==s.enabled){switch(e.preventDefault(),e.touches.length){case 1:switch(s.touches.ONE){case TOUCH.ROTATE:if(!1===s.enableRotate)return;I(e),h=p.TOUCH_ROTATE;break;case TOUCH.PAN:if(!1===s.enablePan)return;z(e),h=p.TOUCH_PAN;break;default:h=p.NONE}break;case 2:switch(s.touches.TWO){case TOUCH.DOLLY_PAN:if(!1===s.enableZoom&&!1===s.enablePan)return;!function(e){s.enableZoom&&X(e),s.enablePan&&z(e)}(e),h=p.TOUCH_DOLLY_PAN;break;case TOUCH.DOLLY_ROTATE:if(!1===s.enableZoom&&!1===s.enableRotate)return;!function(e){s.enableZoom&&X(e),s.enableRotate&&I(e)}(e),h=p.TOUCH_DOLLY_ROTATE;break;default:h=p.NONE}break;default:h=p.NONE}h!==p.NONE&&s.dispatchEvent(l)}}function J(e){if(!1!==s.enabled)switch(e.preventDefault(),e.stopPropagation(),h){case p.TOUCH_ROTATE:if(!1===s.enableRotate)return;_(e),s.update();break;case p.TOUCH_PAN:if(!1===s.enablePan)return;F(e),s.update();break;case p.TOUCH_DOLLY_PAN:if(!1===s.enableZoom&&!1===s.enablePan)return;!function(e){s.enableZoom&&Z(e),s.enablePan&&F(e)}(e),s.update();break;case p.TOUCH_DOLLY_ROTATE:if(!1===s.enableZoom&&!1===s.enableRotate)return;!function(e){s.enableZoom&&Z(e),s.enableRotate&&_(e)}(e),s.update();break;default:h=p.NONE}}function $(e){!1!==s.enabled&&(s.dispatchEvent(m),h=p.NONE)}function ee(e){!1!==s.enabled&&e.preventDefault()}s.domElement.addEventListener("contextmenu",ee,!1),s.domElement.addEventListener("pointerdown",B,!1),s.domElement.addEventListener("wheel",W,!1),s.domElement.addEventListener("touchstart",Q,!1),s.domElement.addEventListener("touchend",$,!1),s.domElement.addEventListener("touchmove",J,!1),s.domElement.addEventListener("keydown",q,!1),-1===s.domElement.tabIndex&&(s.domElement.tabIndex=0),this.update()};OrbitControls.prototype=Object.create(EventDispatcher.prototype),OrbitControls.prototype.constructor=OrbitControls;var MapControls=function(e,t){OrbitControls.call(this,e,t),this.screenSpacePanning=!1,this.mouseButtons.LEFT=MOUSE.PAN,this.mouseButtons.RIGHT=MOUSE.ROTATE,this.touches.ONE=TOUCH.PAN,this.touches.TWO=TOUCH.DOLLY_ROTATE};MapControls.prototype=Object.create(EventDispatcher.prototype),MapControls.prototype.constructor=MapControls;export{OrbitControls,MapControls}; \ No newline at end of file diff --git a/spaces/bielalpha/nerijs-pixel-art-xl/README.md b/spaces/bielalpha/nerijs-pixel-art-xl/README.md deleted file mode 100644 index cd8fe6e34a83aff75b42000f2dc2c44a3d42dce1..0000000000000000000000000000000000000000 --- a/spaces/bielalpha/nerijs-pixel-art-xl/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nerijs Pixel Art Xl -emoji: 🏃 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigPear/digitalWDF/src/__init__.py b/spaces/bigPear/digitalWDF/src/__init__.py deleted file mode 100644 index d8cc901f36752410cded38d01a9b355807f96e66..0000000000000000000000000000000000000000 --- a/spaces/bigPear/digitalWDF/src/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .utils import ( - load_pretrained, - ModelArguments -) diff --git a/spaces/bioriAsaeru/text-to-voice/American Academy Ophthalmology Books Free Download.md b/spaces/bioriAsaeru/text-to-voice/American Academy Ophthalmology Books Free Download.md deleted file mode 100644 index 1d1b2c5a77fb8ac062c290f640fed308d1f29680..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/American Academy Ophthalmology Books Free Download.md +++ /dev/null @@ -1,72 +0,0 @@ -

      american academy ophthalmology books free download


      Download File - https://urloso.com/2uyODG



      -
      -1. Introduction - - 2. History of Ophthalmology - - 3. Classification of Eye Diseases - - 4. Treatment of Eye Diseases - - 5. Ophthalmic Emergencies - - 6. The Legal Responsibilities of Ophthalmologists - - 7. Taking Care of Your Eyes - - 8. Further Reading - - 9. Index - - 8. IV. Microsurgery - - 2. A Brief History of Microsurgery - - 3. Microsurgery Techniques - - 4. Microsurgery Applications - - 5. Further Reading - - 6. Index - - 9. V. Laser Surgery - - 2. A Brief History of Laser Surgery - - 3. Lasers - - 4. Lasers in Medicine - - 5. Lasers in Ophthalmology - - 6. Clinical Applications of Lasers - - 7. Further Reading - - 8. Index - - 10. Index - - 11. About the Author - -## Landmarks - - 1. Cover - -. "And here it is." I found the lump in my sack and tossed it on the fire. - -He looked at me and laughed. "What, did you swallow a hairball?" - -"No." - -"Good. I'll make you something to eat." He made a pot of tea and fried up some of the bread we'd been given in Veldrid. - -We shared our lunch and then ate our dinner, accompanied by more stories from the road and talk of the latest rumors and what we'd heard about the trail. And when we'd eaten, I decided to take a stroll. - -The air was hot, and the silence was oppressive. I took a few long steps before catching myself. And then I ran. - -The great city of Memnon lies ahead, and the walls of the temple of Herakles were covered with shadows. Shadows that fell over me and the throng of people around me. I ran up to the temple, out to the gate, back to the shadow that turned out to be a gap in the iron fence, then ran back into the shadow of the gate. The strange things I was seeing were reality. And reality was strange. I'd seen the 4fefd39f24
      -
      -
      -

      diff --git a/spaces/bioriAsaeru/text-to-voice/Jonathan Strange and Mr Norrell epub 61 Read the epic tale of magic and rivalry in nineteenth-century England.md b/spaces/bioriAsaeru/text-to-voice/Jonathan Strange and Mr Norrell epub 61 Read the epic tale of magic and rivalry in nineteenth-century England.md deleted file mode 100644 index 342cb81e7749d12cc8e3c86ab655d35b345df097..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Jonathan Strange and Mr Norrell epub 61 Read the epic tale of magic and rivalry in nineteenth-century England.md +++ /dev/null @@ -1,6 +0,0 @@ -

      jonathan strange and mr norrell epub 61


      DOWNLOAD === https://urloso.com/2uyO8G



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/bla/tranny/App/Chat/utils/Dev/PalmAPI.py b/spaces/bla/tranny/App/Chat/utils/Dev/PalmAPI.py deleted file mode 100644 index 9b04990fbcdc44fd1fb88fd1c7c3b6e653596884..0000000000000000000000000000000000000000 --- a/spaces/bla/tranny/App/Chat/utils/Dev/PalmAPI.py +++ /dev/null @@ -1,149 +0,0 @@ -import aiohttp -import asyncio -import google.generativeai as palm -from langchain.llms import GooglePalm -from langchain.chains.summarize import load_summarize_chain -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain import PromptTemplate -import os -PALM_API = "" -API_KEY=os.environ.get("PALM_API",PALM_API) -palm.configure(api_key=API_KEY) - -llm = GooglePalm(google_api_key=API_KEY, safety_settings= [ - {"category": "HARM_CATEGORY_DEROGATORY", "threshold": 4}, - {"category": "HARM_CATEGORY_TOXICITY", "threshold": 4}, - {"category": "HARM_CATEGORY_VIOLENCE", "threshold": 4}, - {"category": "HARM_CATEGORY_SEXUAL", "threshold": 4}, - {"category": "HARM_CATEGORY_MEDICAL", "threshold": 4}, - {"category": "HARM_CATEGORY_DANGEROUS", "threshold": 4}, - ],) -text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n"], chunk_size=10000, chunk_overlap=500) -summary_chain = load_summarize_chain(llm=llm, chain_type='map_reduce', -# verbose=True # Set verbose=True if you want to see the prompts being used - ) -essay= ''' TFC Mamma Ron Subway Galito Urban Heart Kootie Java Square In this video, I'm going to try every single fast food chain in Irobi Kenya and I'm going to rate them on a scale of terrible, bad, mid, good and for the incredible ones, go zest! I've broken them up into categories, so pizza category, burger category, chicken, general fast food and breakfast category and I'm starting with Pizza Hut To keep this fair across all restaurants, I'm ordering the cheapest possible meal on the menu or as close to my budget of 500 Kenya shillings and for that price in Pizza Hut Okay, so this is the mine meat lovers pizza This is going to be my first tasting of Pizza Hut in Irobi Kenya I haven't washed my hands, no one has to know that Okay, I could already feel how chunky this pizza is Maybe dip that in this barbecue sauce Mmm Okay, ''' -docs = text_splitter.create_documents([essay]) -# print(docs[0].page_content) -map_prompt = """ -Write a concise summary of the following: -"{text}" -CONCISE SUMMARY: -""" -combine_prompt = """ -Write a concise summary of the following text delimited by triple backquotes. -Return your response in bullet points which covers the key points of the text. -```{text}``` -BULLET POINT SUMMARY: -""" -combine_prompt_template = PromptTemplate(template=combine_prompt, input_variables=["text"]) -map_prompt_template = PromptTemplate(template=map_prompt, input_variables=["text"]) - -summary_chain = load_summarize_chain(llm=llm, - chain_type='map_reduce', - map_prompt=map_prompt_template, - combine_prompt=combine_prompt_template, - verbose=True - ) -output = summary_chain.run(docs) -print(output) - -def count_tokens(text): - return palm.count_message_tokens(prompt=text)['token_count'] - - - -async def summarization(text): - url = f"https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key={API_KEY}" - - headers = { - "Content-Type": "application/json", - } - - data = { - "prompt": { - "text": f"### You are given the an audio transcript as context. You are required to write a summary, list down the main takeaways, also cite the text when necessary.\n \"There's actually a military proven technique to fall asleep in exactly two minutes after closing your eyes. It's mind-blowing. Here's how you can do it too. Now, this technique was developed in the military to allow soldiers to fall asleep at any time, any place, even on the battlefield when the environment is extremely uncomfortable and there's a lot of noise happening. Sleep for a soldier is crucial. Now, according to my research, this was developed mainly for fighter pilots who need 100% of their reflexes and focus, which we all know decreases with the lack of sleep. So here's the technique that they use, and it's quite simple. First, you need to calm your body and systematically relax and shut down each part of your body from head to toe, literally. Start by relaxing the muscles in your forehead. Relax your eyes, your cheeks, your jaw, and focus on your breathing. Now go down to your neck and your shoulders. Make sure your shoulders are not tensed up, drop them as low as you can, and keep your arms loose to your side, including your hands and fingers. Imagine this warm sensation going from your head all the way down to your fingertips. Now take a deep breath and slowly exhale, relaxing your chest, your stomach, down to your thighs, knees, legs, and feet. Again, imagine this warm sensation going down from your heart all the way to your toes. Now while you're doing this, it's really important to clear your mind of any stresses. To do this, think of two scenarios. One, you're lying in a canoe on a calm lake with nothing but a clear blue sky above you. Two, you're lying in a black velvet hammock in a pitch black room. At any time when you start thinking of anything else or you start getting distracted, repeat these words for 10 seconds. Don't think. Don't think. Don't think. So that's the technique. You're supposed to practice every night for six weeks. 96% of people who master this technique are actually able to fall asleep within two minutes of shutting their eyes. I find it super interesting. I did not invent this technique, but I'm definitely going to try it out. Let me know if you're on board as well.\"\n\n\nSummary: The military developed a technique to help soldiers fall asleep at any time, any place. This technique involves systematically relaxing and shutting down each part of the body from head to toe, while clearing the mind of any stresses.\n\n\n\n\n {text} " - }, - "temperature": 0.95, - "top_k": 100, - "top_p": 0.95, - "candidate_count": 1, - "max_output_tokens": 1024, - "stop_sequences": [""], - "safety_settings": [ - {"category": "HARM_CATEGORY_DEROGATORY", "threshold": 4}, - {"category": "HARM_CATEGORY_TOXICITY", "threshold": 4}, - {"category": "HARM_CATEGORY_VIOLENCE", "threshold": 4}, - {"category": "HARM_CATEGORY_SEXUAL", "threshold": 4}, - {"category": "HARM_CATEGORY_MEDICAL", "threshold": 4}, - {"category": "HARM_CATEGORY_DANGEROUS", "threshold": 4}, - ], - } - - - async with aiohttp.ClientSession() as session: - async with session.post(url, json=data, headers=headers) as response: - if response.status == 200: - result = await response.json() - print(result) - temp = result["candidates"][0]["output"] - return temp - else: - print(f"Error: {response.status}\n{await response.text()}") - - - -def generate_summary(data): - template = """ {data} """.format(data=data) - chunk_size = 30_000 - inc = 250 - max_tokens = 7000 - min_tokens = 6000 - max_token_doc = 0 - - # choose the appropriate chunk size - while True: - # initialize text splitter - text_splitter = RecursiveCharacterTextSplitter( - separators=["\n\n", "\n"], - chunk_size=chunk_size, - chunk_overlap=0, - ) - docs = text_splitter.create_documents([data]) - temp =[] - for doc in docs: - temp_tokens=count_tokens(doc.page_content) - temp.append(temp_tokens) - max_token_doc = max(temp) - docs_count=len(temp) - # print(docs[0].page_content) - if docs_count ==1: - break - if max_tokens < max_token_doc or max_token_doc < min_tokens: - if max_tokens < max_token_doc: - chunk_size -= inc - else: - chunk_size += inc - continue - - else: - break - return docs - - -async def main(): - docs=generate_summary(''' -Transcript -Yo, Mabu, you really the only independent artist putting up numbers right now, bro They call me the independent variable for a reason gang. Oh, that's why I spent eight hours a day in school And I still put up more numbers than these fleas Like most of our rappers in Santa the streets only dealers who took was a plea I'm using the police, so this is a match that I'm a PD I cover my rhymes, I'm an innocent creep, keep my daily back when I sleep I'm rapping the words, but they write it for me Me, I'm all about keeping the peace, I'm me at least I get paid because a lot of these rhymes make up for free They call me the duck, I got the spreadsheet shot, but say, can I bring a friend? Never, never, never I guess she forgot the bad light, the two fives don't equal a ten Quick math, don't try it again, she's shaped like a turtle Or a hen, her makeup is fucked, she don't know how to blend, make her do worldle She did it her brains if she given her head, call her Virgil Cause the way she be blind shit got me dead She had a gut arc ten, I want it in a second circle She like Babu, I like purple, so I blew her back out when I left her on red Get it, cause blue and red equals purple So I blew her back out when I left her on red, don't let that go over your head One thought, two thought, three thought four They offered their knees on the floor, Keisha Becky, Sophie, so Texted me begging for mob, black in their bitches, she calling me bro If you're my dog then throw my bone Ruff, ruff, throw my booty, Keisha's in a flow To all my competition, take a grip, take a grip I invested money in myself and it paid, I can take a break too My boy is a household name''' ) - - for doc in docs: - summary=await summarization(doc.page_content) - print(summary) - - - - - - -# if __name__ == '__main__': -# asyncio.run(main=main()) diff --git a/spaces/brjathu/HMR2.0/hmr2/models/hmr2.py b/spaces/brjathu/HMR2.0/hmr2/models/hmr2.py deleted file mode 100644 index 6da022af65f9c62fb0ac928c708032dc3c998130..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/hmr2/models/hmr2.py +++ /dev/null @@ -1,363 +0,0 @@ -import torch -import pytorch_lightning as pl -from typing import Any, Dict, Mapping, Tuple - -from yacs.config import CfgNode - -from ..utils import SkeletonRenderer, MeshRenderer -from ..utils.geometry import aa_to_rotmat, perspective_projection -from .backbones import create_backbone -from .heads import build_smpl_head -from .discriminator import Discriminator -from .losses import Keypoint3DLoss, Keypoint2DLoss, ParameterLoss -from . import SMPL - - -class HMR2(pl.LightningModule): - - def __init__(self, cfg: CfgNode, init_renderer: bool = True): - """ - Setup HMR2 model - Args: - cfg (CfgNode): Config file as a yacs CfgNode - """ - super().__init__() - - # Save hyperparameters - self.save_hyperparameters(logger=False, ignore=['init_renderer']) - - self.cfg = cfg - # Create backbone feature extractor - self.backbone = create_backbone(cfg) - - # Create SMPL head - self.smpl_head = build_smpl_head(cfg) - - # Create discriminator - if self.cfg.LOSS_WEIGHTS.ADVERSARIAL > 0: - self.discriminator = Discriminator() - - # Define loss functions - self.keypoint_3d_loss = Keypoint3DLoss(loss_type='l1') - self.keypoint_2d_loss = Keypoint2DLoss(loss_type='l1') - self.smpl_parameter_loss = ParameterLoss() - - # Instantiate SMPL model - smpl_cfg = {k.lower(): v for k,v in dict(cfg.SMPL).items()} - self.smpl = SMPL(**smpl_cfg) - - # Buffer that shows whetheer we need to initialize ActNorm layers - self.register_buffer('initialized', torch.tensor(False)) - # Setup renderer for visualization - if init_renderer: - self.renderer = SkeletonRenderer(self.cfg) - self.mesh_renderer = MeshRenderer(self.cfg, faces=self.smpl.faces) - else: - self.renderer = None - self.mesh_renderer = None - - # Disable automatic optimization since we use adversarial training - self.automatic_optimization = False - - def get_parameters(self): - all_params = list(self.smpl_head.parameters()) - all_params += list(self.backbone.parameters()) - return all_params - - def configure_optimizers(self) -> Tuple[torch.optim.Optimizer, torch.optim.Optimizer]: - """ - Setup model and distriminator Optimizers - Returns: - Tuple[torch.optim.Optimizer, torch.optim.Optimizer]: Model and discriminator optimizers - """ - param_groups = [{'params': filter(lambda p: p.requires_grad, self.get_parameters()), 'lr': self.cfg.TRAIN.LR}] - - optimizer = torch.optim.AdamW(params=param_groups, - # lr=self.cfg.TRAIN.LR, - weight_decay=self.cfg.TRAIN.WEIGHT_DECAY) - optimizer_disc = torch.optim.AdamW(params=self.discriminator.parameters(), - lr=self.cfg.TRAIN.LR, - weight_decay=self.cfg.TRAIN.WEIGHT_DECAY) - - return optimizer, optimizer_disc - - def forward_step(self, batch: Dict, train: bool = False) -> Dict: - """ - Run a forward step of the network - Args: - batch (Dict): Dictionary containing batch data - train (bool): Flag indicating whether it is training or validation mode - Returns: - Dict: Dictionary containing the regression output - """ - - # Use RGB image as input - x = batch['img'] - batch_size = x.shape[0] - - # Compute conditioning features using the backbone - # if using ViT backbone, we need to use a different aspect ratio - conditioning_feats = self.backbone(x[:,:,:,32:-32]) - - pred_smpl_params, pred_cam, _ = self.smpl_head(conditioning_feats) - - # Store useful regression outputs to the output dict - output = {} - output['pred_cam'] = pred_cam - output['pred_smpl_params'] = {k: v.clone() for k,v in pred_smpl_params.items()} - - # Compute camera translation - device = pred_smpl_params['body_pose'].device - dtype = pred_smpl_params['body_pose'].dtype - focal_length = self.cfg.EXTRA.FOCAL_LENGTH * torch.ones(batch_size, 2, device=device, dtype=dtype) - pred_cam_t = torch.stack([pred_cam[:, 1], - pred_cam[:, 2], - 2*focal_length[:, 0]/(self.cfg.MODEL.IMAGE_SIZE * pred_cam[:, 0] +1e-9)],dim=-1) - output['pred_cam_t'] = pred_cam_t - output['focal_length'] = focal_length - - # Compute model vertices, joints and the projected joints - pred_smpl_params['global_orient'] = pred_smpl_params['global_orient'].reshape(batch_size, -1, 3, 3) - pred_smpl_params['body_pose'] = pred_smpl_params['body_pose'].reshape(batch_size, -1, 3, 3) - pred_smpl_params['betas'] = pred_smpl_params['betas'].reshape(batch_size, -1) - smpl_output = self.smpl(**{k: v.float() for k,v in pred_smpl_params.items()}, pose2rot=False) - pred_keypoints_3d = smpl_output.joints - pred_vertices = smpl_output.vertices - output['pred_keypoints_3d'] = pred_keypoints_3d.reshape(batch_size, -1, 3) - output['pred_vertices'] = pred_vertices.reshape(batch_size, -1, 3) - pred_cam_t = pred_cam_t.reshape(-1, 3) - focal_length = focal_length.reshape(-1, 2) - pred_keypoints_2d = perspective_projection(pred_keypoints_3d, - translation=pred_cam_t, - focal_length=focal_length / self.cfg.MODEL.IMAGE_SIZE) - - output['pred_keypoints_2d'] = pred_keypoints_2d.reshape(batch_size, -1, 2) - return output - - def compute_loss(self, batch: Dict, output: Dict, train: bool = True) -> torch.Tensor: - """ - Compute losses given the input batch and the regression output - Args: - batch (Dict): Dictionary containing batch data - output (Dict): Dictionary containing the regression output - train (bool): Flag indicating whether it is training or validation mode - Returns: - torch.Tensor : Total loss for current batch - """ - - pred_smpl_params = output['pred_smpl_params'] - pred_keypoints_2d = output['pred_keypoints_2d'] - pred_keypoints_3d = output['pred_keypoints_3d'] - - - batch_size = pred_smpl_params['body_pose'].shape[0] - device = pred_smpl_params['body_pose'].device - dtype = pred_smpl_params['body_pose'].dtype - - # Get annotations - gt_keypoints_2d = batch['keypoints_2d'] - gt_keypoints_3d = batch['keypoints_3d'] - gt_smpl_params = batch['smpl_params'] - has_smpl_params = batch['has_smpl_params'] - is_axis_angle = batch['smpl_params_is_axis_angle'] - - # Compute 3D keypoint loss - loss_keypoints_2d = self.keypoint_2d_loss(pred_keypoints_2d, gt_keypoints_2d) - loss_keypoints_3d = self.keypoint_3d_loss(pred_keypoints_3d, gt_keypoints_3d, pelvis_id=25+14) - - # Compute loss on SMPL parameters - loss_smpl_params = {} - for k, pred in pred_smpl_params.items(): - gt = gt_smpl_params[k].view(batch_size, -1) - if is_axis_angle[k].all(): - gt = aa_to_rotmat(gt.reshape(-1, 3)).view(batch_size, -1, 3, 3) - has_gt = has_smpl_params[k] - loss_smpl_params[k] = self.smpl_parameter_loss(pred.reshape(batch_size, -1), gt.reshape(batch_size, -1), has_gt) - - # # Filter out images with corresponding SMPL parameter annotations - # smpl_params = {k: v.clone() for k,v in gt_smpl_params.items()} - # smpl_params['body_pose'] = aa_to_rotmat(smpl_params['body_pose'].reshape(-1, 3)).reshape(batch_size, -1, 3, 3)[:, :, :, :2].permute(0, 1, 3, 2).reshape(batch_size, -1) - # smpl_params['global_orient'] = aa_to_rotmat(smpl_params['global_orient'].reshape(-1, 3)).reshape(batch_size, -1, 3, 3)[:, :, :, :2].permute(0, 1, 3, 2).reshape(batch_size, -1) - # smpl_params['betas'] = smpl_params['betas'] - # has_smpl_params = (batch['has_smpl_params']['body_pose'] > 0) - # smpl_params = {k: v[has_smpl_params] for k, v in smpl_params.items()} - - loss = self.cfg.LOSS_WEIGHTS['KEYPOINTS_3D'] * loss_keypoints_3d+\ - self.cfg.LOSS_WEIGHTS['KEYPOINTS_2D'] * loss_keypoints_2d+\ - sum([loss_smpl_params[k] * self.cfg.LOSS_WEIGHTS[k.upper()] for k in loss_smpl_params]) - - losses = dict(loss=loss.detach(), - loss_keypoints_2d=loss_keypoints_2d.detach(), - loss_keypoints_3d=loss_keypoints_3d.detach()) - - for k, v in loss_smpl_params.items(): - losses['loss_' + k] = v.detach() - - output['losses'] = losses - - return loss - - # Tensoroboard logging should run from first rank only - @pl.utilities.rank_zero.rank_zero_only - def tensorboard_logging(self, batch: Dict, output: Dict, step_count: int, train: bool = True, write_to_summary_writer: bool = True) -> None: - """ - Log results to Tensorboard - Args: - batch (Dict): Dictionary containing batch data - output (Dict): Dictionary containing the regression output - step_count (int): Global training step count - train (bool): Flag indicating whether it is training or validation mode - """ - - mode = 'train' if train else 'val' - batch_size = batch['keypoints_2d'].shape[0] - images = batch['img'] - images = images * torch.tensor([0.229, 0.224, 0.225], device=images.device).reshape(1,3,1,1) - images = images + torch.tensor([0.485, 0.456, 0.406], device=images.device).reshape(1,3,1,1) - #images = 255*images.permute(0, 2, 3, 1).cpu().numpy() - - pred_keypoints_3d = output['pred_keypoints_3d'].detach().reshape(batch_size, -1, 3) - pred_vertices = output['pred_vertices'].detach().reshape(batch_size, -1, 3) - focal_length = output['focal_length'].detach().reshape(batch_size, 2) - gt_keypoints_3d = batch['keypoints_3d'] - gt_keypoints_2d = batch['keypoints_2d'] - losses = output['losses'] - pred_cam_t = output['pred_cam_t'].detach().reshape(batch_size, 3) - pred_keypoints_2d = output['pred_keypoints_2d'].detach().reshape(batch_size, -1, 2) - - if write_to_summary_writer: - summary_writer = self.logger.experiment - for loss_name, val in losses.items(): - summary_writer.add_scalar(mode +'/' + loss_name, val.detach().item(), step_count) - num_images = min(batch_size, self.cfg.EXTRA.NUM_LOG_IMAGES) - - gt_keypoints_3d = batch['keypoints_3d'] - pred_keypoints_3d = output['pred_keypoints_3d'].detach().reshape(batch_size, -1, 3) - - # We render the skeletons instead of the full mesh because rendering a lot of meshes will make the training slow. - #predictions = self.renderer(pred_keypoints_3d[:num_images], - # gt_keypoints_3d[:num_images], - # 2 * gt_keypoints_2d[:num_images], - # images=images[:num_images], - # camera_translation=pred_cam_t[:num_images]) - predictions = self.mesh_renderer.visualize_tensorboard(pred_vertices[:num_images].cpu().numpy(), - pred_cam_t[:num_images].cpu().numpy(), - images[:num_images].cpu().numpy(), - pred_keypoints_2d[:num_images].cpu().numpy(), - gt_keypoints_2d[:num_images].cpu().numpy(), - focal_length=focal_length[:num_images].cpu().numpy()) - if write_to_summary_writer: - summary_writer.add_image('%s/predictions' % mode, predictions, step_count) - - return predictions - - def forward(self, batch: Dict) -> Dict: - """ - Run a forward step of the network in val mode - Args: - batch (Dict): Dictionary containing batch data - Returns: - Dict: Dictionary containing the regression output - """ - return self.forward_step(batch, train=False) - - def training_step_discriminator(self, batch: Dict, - body_pose: torch.Tensor, - betas: torch.Tensor, - optimizer: torch.optim.Optimizer) -> torch.Tensor: - """ - Run a discriminator training step - Args: - batch (Dict): Dictionary containing mocap batch data - body_pose (torch.Tensor): Regressed body pose from current step - betas (torch.Tensor): Regressed betas from current step - optimizer (torch.optim.Optimizer): Discriminator optimizer - Returns: - torch.Tensor: Discriminator loss - """ - batch_size = body_pose.shape[0] - gt_body_pose = batch['body_pose'] - gt_betas = batch['betas'] - gt_rotmat = aa_to_rotmat(gt_body_pose.view(-1,3)).view(batch_size, -1, 3, 3) - disc_fake_out = self.discriminator(body_pose.detach(), betas.detach()) - loss_fake = ((disc_fake_out - 0.0) ** 2).sum() / batch_size - disc_real_out = self.discriminator(gt_rotmat, gt_betas) - loss_real = ((disc_real_out - 1.0) ** 2).sum() / batch_size - loss_disc = loss_fake + loss_real - loss = self.cfg.LOSS_WEIGHTS.ADVERSARIAL * loss_disc - optimizer.zero_grad() - self.manual_backward(loss) - optimizer.step() - return loss_disc.detach() - - def training_step(self, joint_batch: Dict, batch_idx: int) -> Dict: - """ - Run a full training step - Args: - joint_batch (Dict): Dictionary containing image and mocap batch data - batch_idx (int): Unused. - batch_idx (torch.Tensor): Unused. - Returns: - Dict: Dictionary containing regression output. - """ - batch = joint_batch['img'] - mocap_batch = joint_batch['mocap'] - optimizer = self.optimizers(use_pl_optimizer=True) - if self.cfg.LOSS_WEIGHTS.ADVERSARIAL > 0: - optimizer, optimizer_disc = optimizer - - # Update learning rates - self.update_learning_rates(batch_idx) - - batch_size = batch['img'].shape[0] - output = self.forward_step(batch, train=True) - pred_smpl_params = output['pred_smpl_params'] - if self.cfg.get('UPDATE_GT_SPIN', False): - self.update_batch_gt_spin(batch, output) - loss = self.compute_loss(batch, output, train=True) - if self.cfg.LOSS_WEIGHTS.ADVERSARIAL > 0: - disc_out = self.discriminator(pred_smpl_params['body_pose'].reshape(batch_size, -1), pred_smpl_params['betas'].reshape(batch_size, -1)) - loss_adv = ((disc_out - 1.0) ** 2).sum() / batch_size - loss = loss + self.cfg.LOSS_WEIGHTS.ADVERSARIAL * loss_adv - - # Error if Nan - if torch.isnan(loss): - raise ValueError('Loss is NaN') - - optimizer.zero_grad() - self.manual_backward(loss) - # Clip gradient - if self.cfg.TRAIN.get('GRAD_CLIP_VAL', 0) > 0: - gn = torch.nn.utils.clip_grad_norm_(self.get_parameters(), self.cfg.TRAIN.GRAD_CLIP_VAL, error_if_nonfinite=True) - self.log('train/grad_norm', gn, on_step=True, on_epoch=True, prog_bar=True, logger=True) - optimizer.step() - if self.cfg.LOSS_WEIGHTS.ADVERSARIAL > 0: - loss_disc = self.training_step_discriminator(mocap_batch, pred_smpl_params['body_pose'].reshape(batch_size, -1), pred_smpl_params['betas'].reshape(batch_size, -1), optimizer_disc) - output['losses']['loss_gen'] = loss_adv - output['losses']['loss_disc'] = loss_disc - - if self.global_step > 0 and self.global_step % self.cfg.GENERAL.LOG_STEPS == 0: - self.tensorboard_logging(batch, output, self.global_step, train=True) - - self.log('train/loss', output['losses']['loss'], on_step=True, on_epoch=True, prog_bar=True, logger=False) - - return output - - def validation_step(self, batch: Dict, batch_idx: int, dataloader_idx=0) -> Dict: - """ - Run a validation step and log to Tensorboard - Args: - batch (Dict): Dictionary containing batch data - batch_idx (int): Unused. - Returns: - Dict: Dictionary containing regression output. - """ - # batch_size = batch['img'].shape[0] - output = self.forward_step(batch, train=False) - - pred_smpl_params = output['pred_smpl_params'] - loss = self.compute_loss(batch, output, train=False) - output['loss'] = loss - self.tensorboard_logging(batch, output, self.global_step, train=False) - - return output diff --git a/spaces/chendl/compositional_test/transformers/README_hd.md b/spaces/chendl/compositional_test/transformers/README_hd.md deleted file mode 100644 index 24fc985432b7eceee128352ac6d82604bd672f93..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/README_hd.md +++ /dev/null @@ -1,470 +0,0 @@ - - - - -

      -
      - -
      -

      -

      - - Build - - - GitHub - - - Documentation - - - GitHub release - - - Contributor Covenant - - DOI -

      - -

      -

      - English | - 简体中文 | - 繁體中文 | - 한국어 | - Español | - 日本語 | - हिन्दी | -

      -

      - -

      -

      Jax, PyTorch और TensorFlow के लिए उन्नत मशीन लर्निंग

      -

      - -

      - -

      - -🤗 Transformers 100 से अधिक भाषाओं में पाठ वर्गीकरण, सूचना निष्कर्षण, प्रश्न उत्तर, सारांशीकरण, अनुवाद, पाठ निर्माण का समर्थन करने के लिए हजारों पूर्व-प्रशिक्षित मॉडल प्रदान करता है। इसका उद्देश्य सबसे उन्नत एनएलपी तकनीक को सभी के लिए सुलभ बनाना है। - -🤗 Transformers त्वरित डाउनलोड और उपयोग के लिए एक एपीआई प्रदान करता है, जिससे आप किसी दिए गए पाठ पर एक पूर्व-प्रशिक्षित मॉडल ले सकते हैं, इसे अपने डेटासेट पर ठीक कर सकते हैं और इसे [मॉडल हब] (https://huggingface.co/models) के माध्यम से समुदाय के साथ साझा कर सकते हैं। ) . इसी समय, प्रत्येक परिभाषित पायथन मॉड्यूल पूरी तरह से स्वतंत्र है, जो संशोधन और तेजी से अनुसंधान प्रयोगों के लिए सुविधाजनक है। - -🤗 Transformers तीन सबसे लोकप्रिय गहन शिक्षण पुस्तकालयों का समर्थन करता है: [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) — और इसके साथ निर्बाध रूप से एकीकृत होता है। आप अपने मॉडल को सीधे एक ढांचे के साथ प्रशिक्षित कर सकते हैं और दूसरे के साथ लोड और अनुमान लगा सकते हैं। - -## ऑनलाइन डेमो - -आप सबसे सीधे मॉडल पृष्ठ पर परीक्षण कर सकते हैं [model hub](https://huggingface.co/models) मॉडल पर। हम [निजी मॉडल होस्टिंग, मॉडल संस्करण, और अनुमान एपीआई] भी प्रदान करते हैं।(https://huggingface.co/pricing)。 - -यहाँ कुछ उदाहरण हैं: -- [शब्द को भरने के लिए मास्क के रूप में BERT का प्रयोग करें](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France) -- [इलेक्ट्रा के साथ नामित इकाई पहचान](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city) -- [जीपीटी-2 के साथ टेक्स्ट जनरेशन](https://huggingface.co/gpt2?text=A+long+time+ago%2C+) -- [रॉबर्टा के साथ प्राकृतिक भाषा निष्कर्ष](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal) -- [बार्ट के साथ पाठ सारांश](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct) -- [डिस्टिलबर्ट के साथ प्रश्नोत्तर](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species) -- [अनुवाद के लिए T5 का प्रयोग करें](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) - -**[Write With Transformer](https://transformer.huggingface.co)**,हगिंग फेस टीम द्वारा बनाया गया, यह एक आधिकारिक पाठ पीढ़ी है demo。 - -## यदि आप हगिंग फेस टीम से बीस्पोक समर्थन की तलाश कर रहे हैं - - - HuggingFace Expert Acceleration Program -
      - -## जल्दी शुरू करें - -हम त्वरित उपयोग के लिए मॉडल प्रदान करते हैं `pipeline` (पाइपलाइन) एपीआई। पाइपलाइन पूर्व-प्रशिक्षित मॉडल और संबंधित पाठ प्रीप्रोसेसिंग को एकत्रित करती है। सकारात्मक और नकारात्मक भावना को निर्धारित करने के लिए पाइपलाइनों का उपयोग करने का एक त्वरित उदाहरण यहां दिया गया है: - -```python ->>> from transformers import pipeline - -# भावना विश्लेषण पाइपलाइन का उपयोग करना ->>> classifier = pipeline('sentiment-analysis') ->>> classifier('We are very happy to introduce pipeline to the transformers repository.') -[{'label': 'POSITIVE', 'score': 0.9996980428695679}] -``` - -कोड की दूसरी पंक्ति पाइपलाइन द्वारा उपयोग किए गए पूर्व-प्रशिक्षित मॉडल को डाउनलोड और कैश करती है, जबकि कोड की तीसरी पंक्ति दिए गए पाठ पर मूल्यांकन करती है। यहां उत्तर 99 आत्मविश्वास के स्तर के साथ "सकारात्मक" है। - -कई एनएलपी कार्यों में आउट ऑफ़ द बॉक्स पाइपलाइनों का पूर्व-प्रशिक्षण होता है। उदाहरण के लिए, हम किसी दिए गए पाठ से किसी प्रश्न का उत्तर आसानी से निकाल सकते हैं: - -``` python ->>> from transformers import pipeline - -# प्रश्नोत्तर पाइपलाइन का उपयोग करना ->>> question_answerer = pipeline('question-answering') ->>> question_answerer({ -... 'question': 'What is the name of the repository ?', -... 'context': 'Pipeline has been included in the huggingface/transformers repository' -... }) -{'score': 0.30970096588134766, 'start': 34, 'end': 58, 'answer': 'huggingface/transformers'} - -``` - -उत्तर देने के अलावा, पूर्व-प्रशिक्षित मॉडल संगत आत्मविश्वास स्कोर भी देता है, जहां उत्तर टोकनयुक्त पाठ में शुरू और समाप्त होता है। आप [इस ट्यूटोरियल](https://huggingface.co/docs/transformers/task_summary) से पाइपलाइन एपीआई द्वारा समर्थित कार्यों के बारे में अधिक जान सकते हैं। - -अपने कार्य पर किसी भी पूर्व-प्रशिक्षित मॉडल को डाउनलोड करना और उसका उपयोग करना भी कोड की तीन पंक्तियों की तरह सरल है। यहाँ PyTorch संस्करण के लिए एक उदाहरण दिया गया है: -```python ->>> from transformers import AutoTokenizer, AutoModel - ->>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ->>> model = AutoModel.from_pretrained("bert-base-uncased") - ->>> inputs = tokenizer("Hello world!", return_tensors="pt") ->>> outputs = model(**inputs) -``` -यहाँ समकक्ष है TensorFlow कोड: -```python ->>> from transformers import AutoTokenizer, TFAutoModel - ->>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ->>> model = TFAutoModel.from_pretrained("bert-base-uncased") - ->>> inputs = tokenizer("Hello world!", return_tensors="tf") ->>> outputs = model(**inputs) -``` - -टोकननाइज़र सभी पूर्व-प्रशिक्षित मॉडलों के लिए प्रीप्रोसेसिंग प्रदान करता है और इसे सीधे एक स्ट्रिंग (जैसे ऊपर दिए गए उदाहरण) या किसी सूची पर बुलाया जा सकता है। यह एक डिक्शनरी (तानाशाही) को आउटपुट करता है जिसे आप डाउनस्ट्रीम कोड में उपयोग कर सकते हैं या `**` अनपैकिंग एक्सप्रेशन के माध्यम से सीधे मॉडल को पास कर सकते हैं। - -मॉडल स्वयं एक नियमित [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) या [TensorFlow `tf.keras.Model`](https ://pytorch.org/docs/stable/nn.html#torch.nn.Module) ://www.tensorflow.org/api_docs/python/tf/keras/Model) (आपके बैकएंड के आधार पर), जो हो सकता है सामान्य तरीके से उपयोग किया जाता है। [यह ट्यूटोरियल](https://huggingface.co/transformers/training.html) बताता है कि इस तरह के मॉडल को क्लासिक PyTorch या TensorFlow प्रशिक्षण लूप में कैसे एकीकृत किया जाए, या हमारे `ट्रेनर` एपीआई का उपयोग कैसे करें ताकि इसे जल्दी से फ़ाइन ट्यून किया जा सके।एक नया डेटासेट पे। - -## ट्रांसफार्मर का उपयोग क्यों करें? - -1. उपयोग में आसानी के लिए उन्नत मॉडल: - - एनएलयू और एनएलजी पर बेहतर प्रदर्शन - - प्रवेश के लिए कम बाधाओं के साथ शिक्षण और अभ्यास के अनुकूल - - उपयोगकर्ता-सामना करने वाले सार तत्व, केवल तीन वर्गों को जानने की जरूरत है - - सभी मॉडलों के लिए एकीकृत एपीआई - -1. कम कम्प्यूटेशनल ओवरहेड और कम कार्बन उत्सर्जन: - - शोधकर्ता हर बार नए सिरे से प्रशिक्षण देने के बजाय प्रशिक्षित मॉडल साझा कर सकते हैं - - इंजीनियर गणना समय और उत्पादन ओवरहेड को कम कर सकते हैं - - दर्जनों मॉडल आर्किटेक्चर, 2,000 से अधिक पूर्व-प्रशिक्षित मॉडल, 100 से अधिक भाषाओं का समर्थन - -1.मॉडल जीवनचक्र के हर हिस्से को शामिल करता है: - - कोड की केवल 3 पंक्तियों में उन्नत मॉडलों को प्रशिक्षित करें - - मॉडल को मनमाने ढंग से विभिन्न डीप लर्निंग फ्रेमवर्क के बीच स्थानांतरित किया जा सकता है, जैसा आप चाहते हैं - - निर्बाध रूप से प्रशिक्षण, मूल्यांकन और उत्पादन के लिए सबसे उपयुक्त ढांचा चुनें - -1. आसानी से अनन्य मॉडल को अनुकूलित करें और अपनी आवश्यकताओं के लिए मामलों का उपयोग करें: - - हम मूल पेपर परिणामों को पुन: पेश करने के लिए प्रत्येक मॉडल आर्किटेक्चर के लिए कई उपयोग के मामले प्रदान करते हैं - - मॉडल की आंतरिक संरचना पारदर्शी और सुसंगत रहती है - - मॉडल फ़ाइल को अलग से इस्तेमाल किया जा सकता है, जो संशोधन और त्वरित प्रयोग के लिए सुविधाजनक है - -## मुझे ट्रांसफॉर्मर का उपयोग कब नहीं करना चाहिए? - -- यह लाइब्रेरी मॉड्यूलर न्यूरल नेटवर्क टूलबॉक्स नहीं है। मॉडल फ़ाइल में कोड जानबूझकर अल्पविकसित है, बिना अतिरिक्त सार इनकैप्सुलेशन के, ताकि शोधकर्ता अमूर्तता और फ़ाइल जंपिंग में शामिल हुए जल्दी से पुनरावृति कर सकें। -- `ट्रेनर` एपीआई किसी भी मॉडल के साथ संगत नहीं है, यह केवल इस पुस्तकालय के मॉडल के लिए अनुकूलित है। यदि आप सामान्य मशीन लर्निंग के लिए उपयुक्त प्रशिक्षण लूप कार्यान्वयन की तलाश में हैं, तो कहीं और देखें। -- हमारे सर्वोत्तम प्रयासों के बावजूद, [उदाहरण निर्देशिका] (https://github.com/huggingface/transformers/tree/main/examples) में स्क्रिप्ट केवल उपयोग के मामले हैं। आपकी विशिष्ट समस्या के लिए, वे जरूरी नहीं कि बॉक्स से बाहर काम करें, और आपको कोड की कुछ पंक्तियों को सूट करने की आवश्यकता हो सकती है। - -## स्थापित करना - -### पिप का उपयोग करना - -इस रिपॉजिटरी का परीक्षण Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ और TensorFlow 2.3+ के तहत किया गया है। - -आप [वर्चुअल एनवायरनमेंट] (https://docs.python.org/3/library/venv.html) में 🤗 ट्रांसफॉर्मर इंस्टॉल कर सकते हैं। यदि आप अभी तक पायथन के वर्चुअल एनवायरनमेंट से परिचित नहीं हैं, तो कृपया इसे [उपयोगकर्ता निर्देश] (https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/) पढ़ें। - -सबसे पहले, पायथन के उस संस्करण के साथ एक आभासी वातावरण बनाएं जिसका आप उपयोग करने और उसे सक्रिय करने की योजना बना रहे हैं। - -फिर, आपको Flax, PyTorch या TensorFlow में से किसी एक को स्थापित करने की आवश्यकता है। अपने प्लेटफ़ॉर्म पर इन फ़्रेमवर्क को स्थापित करने के लिए, [TensorFlow स्थापना पृष्ठ](https://www.tensorflow.org/install/), [PyTorch स्थापना पृष्ठ](https://pytorch.org/get-started /locally/# देखें) start-locally) या [Flax स्थापना पृष्ठ](https://github.com/google/flax#quick-install). - -जब इनमें से कोई एक बैकएंड सफलतापूर्वक स्थापित हो जाता है, तो ट्रांसफॉर्मर निम्नानुसार स्थापित किए जा सकते हैं: - -```bash -pip install transformers -``` - -यदि आप उपयोग के मामलों को आज़माना चाहते हैं या आधिकारिक रिलीज़ से पहले नवीनतम इन-डेवलपमेंट कोड का उपयोग करना चाहते हैं, तो आपको [सोर्स से इंस्टॉल करना होगा](https://huggingface.co/docs/transformers/installation#installing-from- स्रोत)। - -### कोंडा का उपयोग करना - -ट्रांसफॉर्मर संस्करण 4.0.0 के बाद से, हमारे पास एक कोंडा चैनल है: `हगिंगफेस`। - -ट्रांसफॉर्मर कोंडा के माध्यम से निम्नानुसार स्थापित किया जा सकता है: - -```shell script -conda install -c huggingface transformers -``` - -कोंडा के माध्यम से Flax, PyTorch, या TensorFlow में से किसी एक को स्थापित करने के लिए, निर्देशों के लिए उनके संबंधित स्थापना पृष्ठ देखें। - -## मॉडल आर्किटेक्चर -[उपयोगकर्ता](https://huggingface.co/users) और [organization](https://huggingface.co) द्वारा ट्रांसफॉर्मर समर्थित [**सभी मॉडल चौकियों**](https://huggingface.co/models) /users) हगिंगफेस.को/ऑर्गनाइजेशन), सभी को बिना किसी बाधा के हगिंगफेस.को [मॉडल हब](https://huggingface.co) के साथ एकीकृत किया गया है। - -चौकियों की वर्तमान संख्या: ![](https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen) - -🤗 ट्रांसफॉर्मर वर्तमान में निम्नलिखित आर्किटेक्चर का समर्थन करते हैं (मॉडल के अवलोकन के लिए [यहां] देखें (https://huggingface.co/docs/transformers/model_summary)): - -1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (Google Research and the Toyota Technological Institute at Chicago) साथ थीसिस [ALBERT: A Lite BERT for Self-supervised भाषा प्रतिनिधित्व सीखना](https://arxiv.org/abs/1909.11942), झेंझोंग लैन, मिंगदा चेन, सेबेस्टियन गुडमैन, केविन गिम्पेल, पीयूष शर्मा, राडू सोरिकट -1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (Google Research से) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. द्वाराअनुसंधान पत्र [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) के साथ जारी किया गया -1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell. -1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass. -1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (फेसबुक) साथ थीसिस [बार्ट: प्राकृतिक भाषा निर्माण, अनुवाद के लिए अनुक्रम-से-अनुक्रम पूर्व प्रशिक्षण , और समझ] (https://arxiv.org/pdf/1910.13461.pdf) पर निर्भर माइक लुईस, यिनहान लियू, नमन गोयल, मार्जन ग़ज़विनिनेजाद, अब्देलरहमान मोहम्मद, ओमर लेवी, वेस स्टोयानोव और ल्यूक ज़ेटलमॉयर -1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (से École polytechnique) साथ थीसिस [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) पर निर्भर Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis रिहाई। -1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (VinAI Research से) साथ में पेपर [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701)गुयेन लुओंग ट्रान, डुओंग मिन्ह ले और डाट क्वोक गुयेन द्वारा पोस्ट किया गया। -1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (Microsoft से) साथ में कागज [BEiT: BERT इमेज ट्रांसफॉर्मर्स का प्री-ट्रेनिंग](https://arxiv.org/abs/2106.08254) Hangbo Bao, Li Dong, Furu Wei द्वारा। -1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (गूगल से) साथ वाला पेपर [बीईआरटी: प्री-ट्रेनिंग ऑफ डीप बिडायरेक्शनल ट्रांसफॉर्मर्स फॉर लैंग्वेज अंडरस्टैंडिंग](https://arxiv.org/abs/1810.04805) जैकब डेवलिन, मिंग-वेई चांग, ​​केंटन ली और क्रिस्टीना टौटानोवा द्वारा प्रकाशित किया गया था। . -1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (गूगल से) साथ देने वाला पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https ://arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा। -1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (VinAI Research से) साथ में पेपर [BERTweet: अंग्रेजी ट्वीट्स के लिए एक पूर्व-प्रशिक्षित भाषा मॉडल] (https://aclanthology.org/2020.emnlp-demos.2/) डाट क्वोक गुयेन, थान वु और अन्ह तुआन गुयेन द्वारा प्रकाशित। -1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (गूगल रिसर्च से) साथ वाला पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv .org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानोन, फिलिप फाम, अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा। -1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (गूगल रिसर्च से) साथ में पेपर [बिग बर्ड: ट्रांसफॉर्मर्स फॉर लॉन्गर सीक्वेंस](https://arxiv.org/abs/2007.14062) मंज़िल ज़हीर, गुरु गुरुगणेश, अविनावा दुबे, जोशुआ आइंस्ली, क्रिस अल्बर्टी, सैंटियागो ओंटानन, फिलिप फाम द्वारा , अनिरुद्ध रावुला, किफ़ान वांग, ली यांग, अमर अहमद द्वारा पोस्ट किया गया। -1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. -1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. -1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (फेसबुक से) साथ में कागज [एक ओपन-डोमेन चैटबॉट बनाने की विधि](https://arxiv.org /abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम। स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। -1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (फेसबुक से) साथ में पेपर [एक ओपन-डोमेन चैटबॉट बनाने की रेसिपी](https://arxiv .org/abs/2004.13637) स्टीफन रोलर, एमिली दीनन, नमन गोयल, दा जू, मैरी विलियमसन, यिनहान लियू, जिंग जू, मायल ओट, कर्ट शस्टर, एरिक एम स्मिथ, वाई-लैन बॉरो, जेसन वेस्टन द्वारा। -1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. -1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (Salesforce से) Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. द्वाराअनुसंधान पत्र [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) के साथ जारी किया गया -1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigSicence Workshop](https://bigscience.huggingface.co/). -1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (एलेक्सा से) कागज के साथ [बीईआरटी के लिए ऑप्टिमल सबआर्किटेक्चर एक्सट्रैक्शन](https://arxiv.org/abs/ 2010.10499) एड्रियन डी विंटर और डैनियल जे पेरी द्वारा। -1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (हरबिन इंस्टिट्यूट ऑफ़ टेक्नोलॉजी/माइक्रोसॉफ्ट रिसर्च एशिया/इंटेल लैब्स से) कागज के साथ [ब्रिजटॉवर: विजन-लैंग्वेज रिप्रेजेंटेशन लर्निंग में एनकोडर्स के बीच ब्रिज बनाना]() by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. -1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (Google अनुसंधान से) साथ में कागज [ByT5: पूर्व-प्रशिक्षित बाइट-टू-बाइट मॉडल के साथ एक टोकन-मुक्त भविष्य की ओर] (https://arxiv.org/abs/2105.13626) Linting Xue, Aditya Barua, Noah Constant, रामी अल-रफू, शरण नारंग, मिहिर काले, एडम रॉबर्ट्स, कॉलिन रैफेल द्वारा पोस्ट किया गया। -1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (इनरिया/फेसबुक/सोरबोन से) साथ में कागज [CamemBERT: एक टेस्टी फ्रेंच लैंग्वेज मॉडल](https:// arxiv.org/abs/1911.03894) लुई मार्टिन*, बेंजामिन मुलर*, पेड्रो जेवियर ऑर्टिज़ सुआरेज़*, योआन ड्यूपॉन्ट, लॉरेंट रोमरी, एरिक विलेमोन्टे डे ला क्लर्जरी, जैमे सेडाह और बेनोइट सगोट द्वारा। -1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (Google रिसर्च से) साथ में दिया गया पेपर [कैनाइन: प्री-ट्रेनिंग ए एफिशिएंट टोकनाइजेशन-फ्री एनकोडर फॉर लैंग्वेज रिप्रेजेंटेशन]( https://arxiv.org/abs/2103.06874) जोनाथन एच क्लार्क, डैन गैरेट, यूलिया टर्क, जॉन विएटिंग द्वारा। -1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. -1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (LAION-AI से) Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. द्वाराअनुसंधान पत्र [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation]https://arxiv.org/abs/2211.06687) के साथ जारी किया गया -1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (OpenAI से) साथ वाला पेपर [लर्निंग ट्रांसफरेबल विजुअल मॉडल फ्रॉम नेचुरल लैंग्वेज सुपरविजन](https://arxiv.org /abs/2103.00020) एलेक रैडफोर्ड, जोंग वूक किम, क्रिस हैलासी, आदित्य रमेश, गेब्रियल गोह, संध्या अग्रवाल, गिरीश शास्त्री, अमांडा एस्केल, पामेला मिश्किन, जैक क्लार्क, ग्रेचेन क्रुएगर, इल्या सुत्स्केवर द्वारा। -1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker. -1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (सेल्सफोर्स से) साथ में पेपर [प्रोग्राम सिंथेसिस के लिए एक संवादात्मक प्रतिमान](https://arxiv.org/abs/2203.13474) एरिक निजकैंप, बो पैंग, हिरोआकी हयाशी, लिफू तू, हुआन वांग, यिंगबो झोउ, सिल्वियो सावरेस, कैमिंग जिओंग रिलीज। -1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (माइक्रोसॉफ्ट रिसर्च एशिया से) कागज के साथ [फास्ट ट्रेनिंग कन्वर्जेंस के लिए सशर्त डीईटीआर](https://arxiv. org/abs/2108.06152) डेपू मेंग, ज़ियाओकांग चेन, ज़ेजिया फैन, गैंग ज़ेंग, होउकियांग ली, युहुई युआन, लेई सन, जिंगडोंग वांग द्वारा। -1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (YituTech से) साथ में कागज [ConvBERT: स्पैन-आधारित डायनेमिक कनवल्शन के साथ BERT में सुधार](https://arxiv .org/abs/2008.02496) जिहांग जियांग, वीहाओ यू, डाकान झोउ, युनपेंग चेन, जियाशी फेंग, शुइचेंग यान द्वारा। -1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (Facebook AI से) साथ वाला पेपर [A ConvNet for the 2020s](https://arxiv.org/abs /2201.03545) ज़ुआंग लियू, हेंज़ी माओ, चाओ-युआन वू, क्रिस्टोफ़ फीचटेनहोफ़र, ट्रेवर डेरेल, सैनिंग ज़ी द्वारा। -1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. -1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (सिंघुआ यूनिवर्सिटी से) साथ में पेपर [सीपीएम: ए लार्ज-स्केल जेनेरेटिव चाइनीज प्री-ट्रेंड लैंग्वेज मॉडल](https : //arxiv.org/abs/2012.00413) झेंग्यान झांग, जू हान, हाओ झोउ, पेई के, युक्सियन गु, डेमिंग ये, युजिया किन, युशेंग सु, हाओझे जी, जियान गुआन, फैंचाओ क्यूई, ज़ियाओझी वांग, यानान झेंग द्वारा , गुओयांग ज़ेंग, हुआनकी काओ, शेंगकी चेन, डाइक्सुआन ली, ज़ेनबो सन, ज़ियुआन लियू, मिनली हुआंग, वेंटाओ हान, जी तांग, जुआनज़ी ली, ज़ियाओयान झू, माओसोंग सन। -1. **[CPM-Ant](https://huggingface.co/docs/transformers/main/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/). -1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (सेल्सफोर्स से) साथ में पेपर [CTRL: ए कंडिशनल ट्रांसफॉर्मर लैंग्वेज मॉडल फॉर कंट्रोलेबल जेनरेशन](https://arxiv.org/abs/1909.05858) नीतीश शिरीष केसकर*, ब्रायन मैककैन*, लव आर. वार्ष्णेय, कैमिंग जिओंग और रिचर्ड द्वारा सोचर द्वारा जारी किया गया। -1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (Microsoft से) साथ में दिया गया पेपर [CvT: इंट्रोड्यूसिंग कनवॉल्यूशन टू विजन ट्रांसफॉर्मर्स](https://arxiv.org/ एब्स/2103.15808) हैपिंग वू, बिन जिओ, नोएल कोडेला, मेंगचेन लियू, जियांग दाई, लू युआन, लेई झांग द्वारा। -1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (फेसबुक से) साथ में कागज [Data2Vec: भाषण, दृष्टि और भाषा में स्व-पर्यवेक्षित सीखने के लिए एक सामान्य ढांचा] (https://arxiv.org/abs/2202.03555) एलेक्सी बाएव्स्की, वेई-निंग सू, कियानटोंग जू, अरुण बाबू, जियाताओ गु, माइकल औली द्वारा पोस्ट किया गया। -1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (Microsoft से) साथ में दिया गया पेपर [DeBERta: डिकोडिंग-एन्हांस्ड BERT विद डिसेंटैंगल्ड अटेंशन](https://arxiv. org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा। -1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (Microsoft से) साथ में दिया गया पेपर [DeBERTa: डिकोडिंग-एन्हांस्ड BERT विथ डिसेंन्गल्ड अटेंशन](https: //arxiv.org/abs/2006.03654) पेंगचेंग हे, ज़ियाओडोंग लियू, जियानफेंग गाओ, वीज़ू चेन द्वारा पोस्ट किया गया। -1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (बर्कले/फेसबुक/गूगल से) पेपर के साथ [डिसीजन ट्रांसफॉर्मर: रीनफोर्समेंट लर्निंग वाया सीक्वेंस मॉडलिंग](https : //arxiv.org/abs/2106.01345) लिली चेन, केविन लू, अरविंद राजेश्वरन, किमिन ली, आदित्य ग्रोवर, माइकल लास्किन, पीटर एबील, अरविंद श्रीनिवास, इगोर मोर्डच द्वारा पोस्ट किया गया। -1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (सेंसटाइम रिसर्च से) साथ में पेपर [डिफॉर्मेबल डीईटीआर: डिफॉर्मेबल ट्रांसफॉर्मर्स फॉर एंड-टू-एंड ऑब्जेक्ट डिटेक्शन] (https://arxiv.org/abs/2010.04159) Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, जिफेंग दाई द्वारा पोस्ट किया गया। -1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (फेसबुक से) साथ में पेपर [ट्रेनिंग डेटा-एफिशिएंट इमेज ट्रांसफॉर्मर और डिस्टिलेशन थ्रू अटेंशन](https://arxiv .org/abs/2012.12877) ह्यूगो टौव्रोन, मैथ्यू कॉर्ड, मैथिज्स डूज़, फ़्रांसिस्को मस्सा, एलेक्ज़ेंडर सबलेरोल्स, हर्वे जेगौ द्वारा। -1. **[DePlot](https://huggingface.co/docs/transformers/main/model_doc/deplot)** (Google AI से) Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. द्वाराअनुसंधान पत्र [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) के साथ जारी किया गया -1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. -1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (फेसबुक से) साथ में कागज [ट्रांसफॉर्मर्स के साथ एंड-टू-एंड ऑब्जेक्ट डिटेक्शन](https://arxiv. org/abs/2005.12872) निकोलस कैरियन, फ़्रांसिस्को मस्सा, गेब्रियल सिनेव, निकोलस उसुनियर, अलेक्जेंडर किरिलोव, सर्गेई ज़ागोरुयको द्वारा। -1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [DialoGPT: बड़े पैमाने पर जनरेटिव प्री-ट्रेनिंग फॉर कन्वर्सेशनल रिस्पांस जेनरेशन](https ://arxiv.org/abs/1911.00536) यिज़े झांग, सिकी सन, मिशेल गैली, येन-चुन चेन, क्रिस ब्रोकेट, जियांग गाओ, जियानफेंग गाओ, जिंगजिंग लियू, बिल डोलन द्वारा। -1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi. -1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (हगिंगफेस से), साथ में कागज [डिस्टिलबर्ट, बीईआरटी का डिस्टिल्ड वर्जन: छोटा, तेज, सस्ता और हल्का] (https://arxiv.org/abs/1910.01108) विक्टर सनह, लिसांड्रे डेब्यू और थॉमस वुल्फ द्वारा पोस्ट किया गया। यही तरीका GPT-2 को [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/distillation), RoBERta से [DistilRoBERta](https://github.com) पर कंप्रेस करने के लिए भी लागू किया जाता है। / हगिंगफेस/ट्रांसफॉर्मर्स/ट्री/मेन/उदाहरण/डिस्टिलेशन), बहुभाषी BERT से [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/distillation) और डिस्टिलबर्ट का जर्मन संस्करण। -1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [DiT: सेल्फ सुपरवाइज्ड प्री-ट्रेनिंग फॉर डॉक्यूमेंट इमेज ट्रांसफॉर्मर](https://arxiv.org/abs/2203.02378) जुनलॉन्ग ली, यिहेंग जू, टेंगचाओ लव, लेई कुई, चा झांग द्वारा फुरु वेई द्वारा पोस्ट किया गया। -1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (NAVER से) साथ में कागज [OCR-मुक्त डॉक्यूमेंट अंडरस्टैंडिंग ट्रांसफॉर्मर](https://arxiv.org/abs /2111.15664) गीवूक किम, टीकग्यू होंग, मूनबिन यिम, जियोंग्योन नाम, जिनयॉन्ग पार्क, जिनयॉन्ग यिम, वोनसेओक ह्वांग, सांगडू यूं, डोंगयून हान, सेउंग्युन पार्क द्वारा। -1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (फेसबुक से) साथ में पेपर [ओपन-डोमेन क्वेश्चन आंसरिंग के लिए डेंस पैसेज रिट्रीवल](https://arxiv. org/abs/2004.04906) व्लादिमीर करपुखिन, बरलास ओज़ुज़, सेवन मिन, पैट्रिक लुईस, लेडेल वू, सर्गेई एडुनोव, डैनकी चेन, और वेन-ताऊ यिह द्वारा। -1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (इंटेल लैब्स से) साथ में कागज [विज़न ट्रांसफॉर्मर्स फॉर डेंस प्रेडिक्शन](https://arxiv.org /abs/2103.13413) रेने रैनफ्टल, एलेक्सी बोचकोवस्की, व्लादलेन कोल्टन द्वारा। -1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNetSpeed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. -1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le. -1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (Google रिसर्च/स्टैनफोर्ड यूनिवर्सिटी से) साथ में दिया गया पेपर [इलेक्ट्रा: जेनरेटर के बजाय भेदभाव करने वाले के रूप में टेक्स्ट एन्कोडर्स का पूर्व-प्रशिक्षण] (https://arxiv.org/abs/2003.10555) केविन क्लार्क, मिन्ह-थांग लुओंग, क्वोक वी. ले, क्रिस्टोफर डी. मैनिंग द्वारा पोस्ट किया गया। -1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (Google रिसर्च से) साथ में दिया गया पेपर [सीक्वेंस जेनरेशन टास्क के लिए प्री-ट्रेंड चेकपॉइंट का इस्तेमाल करना](https:/ /arxiv.org/abs/1907.12461) साशा रोठे, शशि नारायण, अलियाक्सि सेवेरिन द्वारा। -1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)**(Baidu से) साथ देने वाला पेपर [ERNIE: एन्हांस्ड रिप्रेजेंटेशन थ्रू नॉलेज इंटीग्रेशन](https://arxiv.org/abs/1904.09223) यू सन, शुओहुआन वांग, युकुन ली, शिकुन फेंग, ज़ुई चेन, हान झांग, शिन तियान, डैनक्सियांग झू, हाओ तियान, हुआ वू द्वारा पोस्ट किया गया। -1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (Baidu से) Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. द्वाराअनुसंधान पत्र [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) के साथ जारी किया गया -1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (मेटा AI से) ट्रांसफॉर्मर प्रोटीन भाषा मॉडल हैं। **ESM-1b** पेपर के साथ जारी किया गया था [ अलेक्जेंडर राइव्स, जोशुआ मेयर, टॉम सर्कु, सिद्धार्थ गोयल, ज़ेमिंग लिन द्वारा जैविक संरचना और कार्य असुरक्षित सीखने को 250 मिलियन प्रोटीन अनुक्रमों तक स्केल करने से उभरता है] (https://www.pnas.org/content/118/15/e2016239118) जेसन लियू, डेमी गुओ, मायल ओट, सी. लॉरेंस ज़िटनिक, जेरी मा और रॉब फर्गस। **ESM-1v** को पेपर के साथ जारी किया गया था [भाषा मॉडल प्रोटीन फ़ंक्शन पर उत्परिवर्तन के प्रभावों की शून्य-शॉट भविष्यवाणी को सक्षम करते हैं] (https://doi.org/10.1101/2021.07.09.450648) जोशुआ मेयर, रोशन राव, रॉबर्ट वेरकुइल, जेसन लियू, टॉम सर्कु और अलेक्जेंडर राइव्स द्वारा। **ESM-2** को पेपर के साथ जारी किया गया था [भाषा मॉडल विकास के पैमाने पर प्रोटीन अनुक्रम सटीक संरचना भविष्यवाणी को सक्षम करते हैं](https://doi.org/10.1101/2022.07.20.500902) ज़ेमिंग लिन, हलील अकिन, रोशन राव, ब्रायन ही, झोंगकाई झू, वेंटिंग लू, ए द्वारा लान डॉस सैंटोस कोस्टा, मरियम फ़ज़ल-ज़रंडी, टॉम सर्कू, साल कैंडिडो, अलेक्जेंडर राइव्स। -1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei -1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei -1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (CNRS से) साथ वाला पेपर [FlauBERT: Unsupervised Language Model Pre-training for फ़्रेंच](https://arxiv .org/abs/1912.05372) Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, बेंजामिन लेकोउटेक्स, अलेक्जेंड्रे अल्लाउज़ेन, बेनोइट क्रैबे, लॉरेंट बेसेसियर, डिडिएर श्वाब द्वारा। -1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (FLAVA: A फाउंडेशनल लैंग्वेज एंड विजन अलाइनमेंट मॉडल) (https://arxiv) साथ वाला पेपर .org/abs/2112.04482) अमनप्रीत सिंह, रोंगहांग हू, वेदानुज गोस्वामी, गुइल्यूम कुएरॉन, वोज्शिएक गालुबा, मार्कस रोहरबैक, और डौवे कीला द्वारा। -1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (गूगल रिसर्च से) साथ वाला पेपर [FNet: मिक्सिंग टोकन विद फूरियर ट्रांसफॉर्म्स](https://arxiv.org /abs/2105.03824) जेम्स ली-थॉर्प, जोशुआ आइंस्ली, इल्या एकस्टीन, सैंटियागो ओंटानन द्वारा। -1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [फ़नल-ट्रांसफॉर्मर: कुशल भाषा प्रसंस्करण के लिए अनुक्रमिक अतिरेक को छानना](https://arxiv.org/abs/2006.03236) जिहांग दाई, गुओकुन लाई, यिमिंग यांग, क्वोक वी. ले ​​द्वारा रिहाई। -1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. -1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (KAIST से) साथ वाला पेपर [वर्टिकल कटडेप्थ के साथ मोनोकुलर डेप्थ एस्टीमेशन के लिए ग्लोबल-लोकल पाथ नेटवर्क्स](https:/ /arxiv.org/abs/2201.07436) डोयोन किम, वूंगह्युन गा, प्युंगवान आह, डोंगग्यू जू, सेहवान चुन, जुनमो किम द्वारा। -1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (OpenAI से) साथ में दिया गया पेपर [जेनरेटिव प्री-ट्रेनिंग द्वारा भाषा की समझ में सुधार](https://blog .openai.com/language-unsupervised/) एलेक रैडफोर्ड, कार्तिक नरसिम्हन, टिम सालिमन्स और इल्या सुत्स्केवर द्वारा। -1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (EleutherAI से) रिपॉजिटरी के साथ [EleutherAI/gpt-neo](https://github.com/ EleutherAI /gpt-neo) रिलीज। सिड ब्लैक, स्टेला बिडरमैन, लियो गाओ, फिल वांग और कॉनर लेही द्वारा पोस्ट किया गया। -1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (EleutherAI से) पेपर के साथ जारी किया गया [GPT-NeoX-20B: एक ओपन-सोर्स ऑटोरेग्रेसिव लैंग्वेज मॉडल] (https://arxiv.org/abs/2204.06745) सिड ब्लैक, स्टेला बिडरमैन, एरिक हैलाहन, क्वेंटिन एंथोनी, लियो गाओ, लॉरेंस गोल्डिंग, होरेस हे, कॉनर लेही, काइल मैकडोनेल, जेसन फांग, माइकल पाइलर, यूएसवीएसएन साई प्रशांत द्वारा , शिवांशु पुरोहित, लारिया रेनॉल्ड्स, जोनाथन टो, बेन वांग, सैमुअल वेनबैक -1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (अबेजा के जरिए) शिन्या ओटानी, ताकायोशी मकाबे, अनुज अरोड़ा, क्यो हटोरी द्वारा। -1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (ओपनएआई से) साथ में पेपर [लैंग्वेज मॉडल्स अनसुपरवाइज्ड मल्टीटास्क लर्नर्स हैं](https://blog.openai.com/better-language-models/) एलेक रैडफोर्ड*, जेफरी वू*, रेवन चाइल्ड, डेविड लुआन, डारियो एमोडी* द्वारा * और इल्या सुत्सकेवर** ने पोस्ट किया। -1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (EleutherAI से) साथ वाला पेपर [kingoflolz/mesh-transformer-jax](https://github. com/kingoflolz/mesh-transformer-jax/) बेन वांग और अरन कोमात्सुजाकी द्वारा। -1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. -1. **[GPTBigCode](https://huggingface.co/docs/transformers/main/model_doc/gpt_bigcode)** (BigCode से) Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. द्वाराअनुसंधान पत्र [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) के साथ जारी किया गया -1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama). -1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu. -1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (UCSD, NVIDIA से) साथ में कागज [GroupViT: टेक्स्ट सुपरविजन से सिमेंटिक सेगमेंटेशन इमर्जेस](https://arxiv .org/abs/2202.11094) जियारुई जू, शालिनी डी मेलो, सिफ़ी लियू, वोनमिन बायन, थॉमस ब्रेउएल, जान कौट्ज़, ज़ियाओलोंग वांग द्वारा। -1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (फेसबुक से) साथ में पेपर [ह्यूबर्ट: सेल्फ सुपरवाइज्ड स्पीच रिप्रेजेंटेशन लर्निंग बाय मास्क्ड प्रेडिक्शन ऑफ हिडन यूनिट्स](https ://arxiv.org/abs/2106.07447) वेई-निंग सू, बेंजामिन बोल्टे, याओ-हंग ह्यूबर्ट त्साई, कुशाल लखोटिया, रुस्लान सालाखुतदीनोव, अब्देलरहमान मोहम्मद द्वारा। -1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (बर्कले से) साथ में कागज [I-BERT: Integer-only BERT Quantization](https:// arxiv.org/abs/2101.01321) सेहून किम, अमीर घोलमी, ज़ेवेई याओ, माइकल डब्ल्यू महोनी, कर्ट केटज़र द्वारा। -1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. -1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. -1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever. -1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou. -1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. -1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ देने वाला पेपर [लेआउटएलएमवी3: यूनिफाइड टेक्स्ट और इमेज मास्किंग के साथ दस्तावेज़ एआई के लिए पूर्व-प्रशिक्षण](https://arxiv.org/abs/2204.08387) युपन हुआंग, टेंगचाओ लव, लेई कुई, युटोंग लू, फुरु वेई द्वारा पोस्ट किया गया। -1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. -1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. -1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (मेटा AI से) साथ वाला पेपर [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https:/ /arxiv.org/abs/2104.01136) बेन ग्राहम, अलाएल्डिन एल-नौबी, ह्यूगो टौवरन, पियरे स्टॉक, आर्मंड जौलिन, हर्वे जेगौ, मैथिज डूज़ द्वारा। -1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (दक्षिण चीन प्रौद्योगिकी विश्वविद्यालय से) साथ में कागज [LiLT: एक सरल लेकिन प्रभावी भाषा-स्वतंत्र लेआउट ट्रांसफार्मर संरचित दस्तावेज़ समझ के लिए](https://arxiv.org/abs/2202.13669) जियापेंग वांग, लियानवेन जिन, काई डिंग द्वारा पोस्ट किया गया। -1. **[LLaMA](https://huggingface.co/docs/transformers/main/model_doc/llama)** (The FAIR team of Meta AI से) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. द्वाराअनुसंधान पत्र [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) के साथ जारी किया गया -1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan. -1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (मैंडी गुओ, जोशुआ आइंस्ली, डेविड यूथस, सैंटियागो ओंटानन, जियानमो नि, यूं-हुआन सुंग, यिनफेई यांग द्वारा पोस्ट किया गया। -1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (स्टूडियो औसिया से) साथ में पेपर [LUKE: डीप कॉन्टेक्स्टुअलाइज्ड एंटिटी रिप्रेजेंटेशन विद एंटिटी-अवेयर सेल्फ-अटेंशन](https ://arxiv.org/abs/2010.01057) Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto द्वारा। -1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (UNC चैपल हिल से) साथ में पेपर [LXMERT: ओपन-डोमेन क्वेश्चन के लिए ट्रांसफॉर्मर से क्रॉस-मोडलिटी एनकोडर रिप्रेजेंटेशन सीखना Answering](https://arxiv.org/abs/1908.07490) हाओ टैन और मोहित बंसल द्वारा। -1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. -1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (फेसबुक से) साथ देने वाला पेपर [बियॉन्ड इंग्लिश-सेंट्रिक मल्टीलिंगुअल मशीन ट्रांसलेशन](https://arxiv.org/ एब्स/2010.11125) एंजेला फैन, श्रुति भोसले, होल्गर श्वेन्क, झी मा, अहमद अल-किश्की, सिद्धार्थ गोयल, मनदीप बैनेस, ओनूर सेलेबी, गुइल्लाम वेन्जेक, विश्रव चौधरी, नमन गोयल, टॉम बर्च, विटाली लिपचिंस्की, सर्गेई एडुनोव, एडौर्ड द्वारा ग्रेव, माइकल औली, आर्मंड जौलिन द्वारा पोस्ट किया गया। -1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Jörg द्वारा [OPUS](http://opus.nlpl.eu/) डेटा से प्रशिक्षित मशीनी अनुवाद मॉडल पोस्ट किया गया टाइडेमैन द्वारा। [मैरियन फ्रेमवर्क](https://marian-nmt.github.io/) माइक्रोसॉफ्ट ट्रांसलेटर टीम द्वारा विकसित। -1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (माइक्रोसॉफ्ट रिसर्च एशिया से) साथ में पेपर [मार्कअपएलएम: विजुअली-रिच डॉक्यूमेंट अंडरस्टैंडिंग के लिए टेक्स्ट और मार्कअप लैंग्वेज का प्री-ट्रेनिंग] (https://arxiv.org/abs/2110.08518) जुनलॉन्ग ली, यिहेंग जू, लेई कुई, फुरु द्वारा वी द्वारा पोस्ट किया गया। -1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (FAIR and UIUC से) Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. द्वाराअनुसंधान पत्र [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) के साथ जारी किया गया -1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (मेटा और UIUC से) पेपर के साथ जारी किया गया [प्रति-पिक्सेल वर्गीकरण वह सब नहीं है जिसकी आपको सिमेंटिक सेगमेंटेशन की आवश्यकता है] (https://arxiv.org/abs/2107.06278) बोवेन चेंग, अलेक्जेंडर जी. श्विंग, अलेक्जेंडर किरिलोव द्वारा >>>>>> रिबेस ठीक करें -1. **[MatCha](https://huggingface.co/docs/transformers/main/model_doc/matcha)** (Google AI से) Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. द्वाराअनुसंधान पत्र [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) के साथ जारी किया गया -1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (फेसबुक से) साथ में पेपर [न्यूरल मशीन ट्रांसलेशन के लिए मल्टीलिंगुअल डीनोइजिंग प्री-ट्रेनिंग](https://arxiv. org/abs/2001.08210) यिनहान लियू, जियाताओ गु, नमन गोयल, जियान ली, सर्गेई एडुनोव, मार्जन ग़ज़विनिनेजाद, माइक लुईस, ल्यूक ज़ेटलमॉयर द्वारा। -1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (फेसबुक से) साथ में पेपर [एक्स्टेंसिबल बहुभाषी प्रीट्रेनिंग और फाइनट्यूनिंग के साथ बहुभाषी अनुवाद](https://arxiv युकिंग टैंग, चाउ ट्रान, जियान ली, पेंग-जेन चेन, नमन गोयल, विश्रव चौधरी, जियाताओ गु, एंजेला फैन द्वारा .org/abs/2008.00401)। -1. **[MEGA](https://huggingface.co/docs/transformers/main/model_doc/mega)** (Facebook से) Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. द्वाराअनुसंधान पत्र [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) के साथ जारी किया गया -1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (NVIDIA से) कागज के साथ [Megatron-LM: मॉडल का उपयोग करके बहु-अरब पैरामीटर भाषा मॉडल का प्रशिक्षण Parallelism](https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा। -1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (NVIDIA से) साथ वाला पेपर [Megatron-LM: ट्रेनिंग मल्टी-बिलियन पैरामीटर लैंग्वेज मॉडल्स यूजिंग मॉडल पैरेललिज़्म] (https://arxiv.org/abs/1909.08053) मोहम्मद शोएबी, मोस्टोफा पटवारी, राउल पुरी, पैट्रिक लेग्रेस्ले, जेरेड कैस्पर और ब्रायन कैटानज़ारो द्वारा पोस्ट किया गया। -1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (Alibaba Research से) Peng Wang, Cheng Da, and Cong Yao. द्वाराअनुसंधान पत्र [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) के साथ जारी किया गया -1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (फ्रॉम Studio Ousia) साथ में पेपर [mLUKE: द पावर ऑफ एंटिटी रिप्रेजेंटेशन इन मल्टीलिंगुअल प्रीट्रेन्ड लैंग्वेज मॉडल्स](https://arxiv.org/abs/2110.08151) रयोकन री, इकुया यामाडा, और योशिमासा त्सुरोका द्वारा। -1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (सीएमयू/गूगल ब्रेन से) साथ में कागज [मोबाइलबर्ट: संसाधन-सीमित उपकरणों के लिए एक कॉम्पैक्ट टास्क-अज्ञेय बीईआरटी] (https://arxiv.org/abs/2004.02984) Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, और Denny Zhou द्वारा पोस्ट किया गया। -1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. -1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. -1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (Apple से) साथ में कागज [MobileViT: लाइट-वेट, जनरल-पर्पस, और मोबाइल-फ्रेंडली विजन ट्रांसफॉर्मर] (https://arxiv.org/abs/2110.02178) सचिन मेहता और मोहम्मद रस्तगरी द्वारा पोस्ट किया गया। -1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu. -1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (Google AI से) साथ वाला पेपर [mT5: एक व्यापक बहुभाषी पूर्व-प्रशिक्षित टेक्स्ट-टू-टेक्स्ट ट्रांसफॉर्मर]( https://arxiv.org/abs/2010.11934) लिंटिंग ज़ू, नोआ कॉन्सटेंट, एडम रॉबर्ट्स, मिहिर काले, रामी अल-रफू, आदित्य सिद्धांत, आदित्य बरुआ, कॉलिन रैफेल द्वारा पोस्ट किया गया। -1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen. -1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. -1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (हुआवेई नूह के आर्क लैब से) साथ में कागज़ [NEZHA: चीनी भाषा समझ के लिए तंत्रिका प्रासंगिक प्रतिनिधित्व](https :/ /arxiv.org/abs/1909.00204) जुन्किउ वेई, ज़ियाओज़े रेन, ज़िआओगुआंग ली, वेनयोंग हुआंग, यी लियाओ, याशेंग वांग, जियाशू लिन, शिन जियांग, जिओ चेन और कुन लियू द्वारा। -1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (फ्रॉम मेटा) साथ में पेपर [नो लैंग्वेज लेफ्ट बिहाइंड: स्केलिंग ह्यूमन-सेंटेड मशीन ट्रांसलेशन] (https://arxiv.org/abs/2207.04672) एनएलएलबी टीम द्वारा प्रकाशित। -1. **[NLLB-MOE](https://huggingface.co/docs/transformers/main/model_doc/nllb-moe)** (Meta से) the NLLB team. द्वाराअनुसंधान पत्र [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) के साथ जारी किया गया -1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (विस्कॉन्सिन विश्वविद्यालय - मैडिसन से) साथ में कागज [Nyströmformer: A Nyström- आधारित एल्गोरिथम आत्म-ध्यान का अनुमान लगाने के लिए ](https://arxiv.org/abs/2102.03902) युनयांग ज़िओंग, झानपेंग ज़ेंग, रुद्रसिस चक्रवर्ती, मिंगक्सिंग टैन, ग्लेन फंग, यिन ली, विकास सिंह द्वारा पोस्ट किया गया। -1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (SHI Labs से) पेपर [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) जितेश जैन, जिआचेन ली, मांगटिक चिउ, अली हसनी, निकिता ओरलोव, हम्फ्री शि के द्वारा जारी किया गया है। -1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al. -1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (Google AI से) साथ में कागज [विज़न ट्रांसफॉर्मर्स के साथ सिंपल ओपन-वोकैबुलरी ऑब्जेक्ट डिटेक्शन](https:/ /arxiv.org/abs/2205.06230) मैथियास मिंडरर, एलेक्सी ग्रिट्सेंको, ऑस्टिन स्टोन, मैक्सिम न्यूमैन, डिर्क वीसेनबोर्न, एलेक्सी डोसोवित्स्की, अरविंद महेंद्रन, अनुराग अर्नब, मुस्तफा देहघानी, ज़ुओरन शेन, जिओ वांग, ज़ियाओहुआ झाई, थॉमस किफ़, और नील हॉल्सबी द्वारा पोस्ट किया गया। -1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu. -1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (Google की ओर से) साथ में दिया गया पेपर [लंबे इनपुट सारांश के लिए ट्रांसफ़ॉर्मरों को बेहतर तरीके से एक्सटेंड करना](https://arxiv .org/abs/2208.04347) जेसन फांग, याओ झाओ, पीटर जे लियू द्वारा। -1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (दीपमाइंड से) साथ में पेपर [पर्सीवर आईओ: संरचित इनपुट और आउटपुट के लिए एक सामान्य वास्तुकला] (https://arxiv.org/abs/2107.14795) एंड्रयू जेगल, सेबेस्टियन बोरग्यूड, जीन-बैप्टिस्ट अलायराक, कार्ल डोर्श, कैटलिन इओनेस्कु, डेविड द्वारा डिंग, स्कंद कोप्पुला, डैनियल ज़ोरान, एंड्रयू ब्रॉक, इवान शेलहैमर, ओलिवियर हेनाफ, मैथ्यू एम। बोट्विनिक, एंड्रयू ज़िसरमैन, ओरिओल विनियल्स, जोआओ कैरेरा द्वारा पोस्ट किया गया। -1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (VinAI Research से) कागज के साथ [PhoBERT: वियतनामी के लिए पूर्व-प्रशिक्षित भाषा मॉडल](https://www .aclweb.org/anthology/2020.findings-emnlp.92/) डैट क्वोक गुयेन और अन्ह तुआन गुयेन द्वारा पोस्ट किया गया। -1. **[Pix2Struct](https://huggingface.co/docs/transformers/main/model_doc/pix2struct)** (Google से) Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova. द्वाराअनुसंधान पत्र [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) के साथ जारी किया गया -1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (UCLA NLP से) साथ वाला पेपर [प्रोग्राम अंडरस्टैंडिंग एंड जेनरेशन के लिए यूनिफाइड प्री-ट्रेनिंग](https://arxiv .org/abs/2103.06333) वसी उद्दीन अहमद, सैकत चक्रवर्ती, बैशाखी रे, काई-वेई चांग द्वारा। -1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng. -1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू-सीक्वेंस प्री-ट्रेनिंग ](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा पोस्ट किया गया। -1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (NVIDIA से) साथ वाला पेपर [डीप लर्निंग इंफ़ेक्शन के लिए इंटीजर क्वांटिज़ेशन: प्रिंसिपल्स एंड एम्पिरिकल इवैल्यूएशन](https:// arxiv.org/abs/2004.09602) हाओ वू, पैट्रिक जुड, जिआओजी झांग, मिखाइल इसेव और पॉलियस माइकेविसियस द्वारा। -1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (फेसबुक से) साथ में कागज [रिट्रीवल-ऑगमेंटेड जेनरेशन फॉर नॉलेज-इंटेंसिव एनएलपी टास्क](https://arxiv .org/abs/2005.11401) पैट्रिक लुईस, एथन पेरेज़, अलेक्जेंड्रा पिक्टस, फैबियो पेट्रोनी, व्लादिमीर कारपुखिन, नमन गोयल, हेनरिक कुटलर, माइक लुईस, वेन-ताउ यिह, टिम रॉकटाशेल, सेबस्टियन रिडेल, डौवे कीला द्वारा। -1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (Google अनुसंधान से) केल्विन गु, केंटन ली, ज़ोरा तुंग, पानुपोंग पसुपत और मिंग-वेई चांग द्वारा साथ में दिया गया पेपर [REALM: रिट्रीवल-ऑगमेंटेड लैंग्वेज मॉडल प्री-ट्रेनिंग](https://arxiv.org/abs/2002.08909)। -1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. -1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (META रिसर्च से) [डिज़ाइनिंग नेटवर्क डिज़ाइन स्पेस] (https://arxiv.org/) पेपर के साथ जारी किया गया एब्स/2003.13678) इलिजा राडोसावोविक, राज प्रतीक कोसाराजू, रॉस गिर्शिक, कैमिंग ही, पिओटर डॉलर द्वारा। -1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (गूगल रिसर्च से) साथ वाला पेपर [पूर्व-प्रशिक्षित भाषा मॉडल में एम्बेडिंग कपलिंग पर पुनर्विचार](https://arxiv .org/pdf/2010.12821.pdf) ह्युंग वोन चुंग, थिबॉल्ट फ़ेवरी, हेनरी त्साई, एम. जॉनसन, सेबेस्टियन रुडर द्वारा। -1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (माइक्रोसॉफ्ट रिसर्च से) [डीप रेसिडुअल लर्निंग फॉर इमेज रिकग्निशन] (https://arxiv. org/abs/1512.03385) कैमिंग हे, जियांग्यु झांग, शाओकिंग रेन, जियान सन द्वारा। -1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (फेसबुक से), साथ में कागज [मजबूत रूप से अनुकूलित BERT प्रीट्रेनिंग दृष्टिकोण](https://arxiv.org/abs /1907.11692) यिनहान लियू, मायल ओट, नमन गोयल, जिंगफेई डू, मंदार जोशी, डैनकी चेन, ओमर लेवी, माइक लुईस, ल्यूक ज़ेटलमॉयर, वेसेलिन स्टोयानोव द्वारा। -1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. -1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou. -1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (झुईई टेक्नोलॉजी से), साथ में पेपर [रोफॉर्मर: रोटरी पोजिशन एंबेडिंग के साथ एन्हांस्ड ट्रांसफॉर्मर] (https://arxiv.org/pdf/2104.09864v1.pdf) जियानलिन सु और यू लू और शेंगफेंग पैन और बो वेन और युनफेंग लियू द्वारा प्रकाशित। -1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. -1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (ASAPP से) साथ देने वाला पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स](https ://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योव आर्टज़ी द्वारा। -1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (ASAPP से) साथ में पेपर [भाषण पहचान के लिए अनसुपरवाइज्ड प्री-ट्रेनिंग में परफॉर्मेंस-एफिशिएंसी ट्रेड-ऑफ्स] (https://arxiv.org/abs/2109.06870) फेलिक्स वू, क्वांगयुन किम, जिंग पैन, क्यू हान, किलियन क्यू. वेनबर्गर, योआव आर्टज़ी द्वारा पोस्ट किया गया। -1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. -1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (फेसबुक से), साथ में पेपर [फेयरसेक S2T: फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग विद फेयरसेक](https: //arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया。 -1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (फेसबुक से) साथ में पेपर [लार्ज-स्केल सेल्फ- एंड सेमी-सुपरवाइज्ड लर्निंग फॉर स्पीच ट्रांसलेशन](https://arxiv.org/abs/2104.06678) चांगहान वांग, ऐनी वू, जुआन पिनो, एलेक्सी बेवस्की, माइकल औली, एलेक्सिस द्वारा Conneau द्वारा पोस्ट किया गया। -1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (तेल अवीव यूनिवर्सिटी से) साथ में पेपर [स्पैन सिलेक्शन को प्री-ट्रेनिंग करके कुछ-शॉट क्वेश्चन आंसरिंग](https:// arxiv.org/abs/2101.00438) ओरि राम, युवल कर्स्टन, जोनाथन बेरेंट, अमीर ग्लोबर्सन, ओमर लेवी द्वारा। -1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (बर्कले से) कागज के साथ [SqueezeBERT: कुशल तंत्रिका नेटवर्क के बारे में NLP को कंप्यूटर विज़न क्या सिखा सकता है?](https: //arxiv.org/abs/2006.11316) फॉरेस्ट एन. इनडोला, अल्बर्ट ई. शॉ, रवि कृष्णा, और कर्ट डब्ल्यू. केटज़र द्वारा। -1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (माइक्रोसॉफ्ट से) साथ में कागज [स्वाइन ट्रांसफॉर्मर: शिफ्टेड विंडोज का उपयोग कर पदानुक्रमित विजन ट्रांसफॉर्मर](https://arxiv .org/abs/2103.14030) ज़ी लियू, युटोंग लिन, यू काओ, हान हू, यिक्सुआन वेई, झेंग झांग, स्टीफन लिन, बैनिंग गुओ द्वारा। -1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (Microsoft से) साथ वाला पेपर [Swin Transformer V2: स्केलिंग अप कैपेसिटी एंड रेजोल्यूशन](https:// ज़ी लियू, हान हू, युटोंग लिन, ज़ुलिआंग याओ, ज़ेंडा ज़ी, यिक्सुआन वेई, जिया निंग, यू काओ, झेंग झांग, ली डोंग, फुरु वेई, बैनिंग गुओ द्वारा arxiv.org/abs/2111.09883। -1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte. -1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer. -1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (来自 Google AI)कॉलिन रैफेल और नोम शज़ीर और एडम रॉबर्ट्स और कैथरीन ली और शरण नारंग और माइकल मटेना द्वारा साथ में पेपर [एक एकीकृत टेक्स्ट-टू-टेक्स्ट ट्रांसफॉर्मर के साथ स्थानांतरण सीखने की सीमा की खोज] (https://arxiv.org/abs/1910.10683) और यांकी झोउ और वेई ली और पीटर जे लियू। -1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (Google AI से) साथ वाला पेपर [google-research/text-to-text-transfer- ट्रांसफॉर्मर](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) कॉलिन रैफेल और नोम शज़ीर और एडम रॉबर्ट्स और कैथरीन ली और शरण नारंग द्वारा और माइकल मटेना और यांकी झोउ और वेई ली और पीटर जे लियू। -1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [पबटेबल्स-1एम: टूवर्ड्स कॉम्प्रिहेंसिव टेबल एक्सट्रैक्शन फ्रॉम अनस्ट्रक्चर्ड डॉक्यूमेंट्स ](https://arxiv.org/abs/2110.00061) ब्रैंडन स्मॉक, रोहित पेसाला, रॉबिन अब्राहम द्वारा पोस्ट किया गया। -1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (Google AI से) साथ में कागज [TAPAS: पूर्व-प्रशिक्षण के माध्यम से कमजोर पर्यवेक्षण तालिका पार्सिंग](https:// arxiv.org/abs/2004.02349) जोनाथन हर्ज़िग, पावेल क्रिज़िस्तोफ़ नोवाक, थॉमस मुलर, फ्रांसेस्को पिकिन्नो और जूलियन मार्टिन ईसेन्च्लोस द्वारा। -1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (माइक्रोसॉफ्ट रिसर्च से) साथ में पेपर [TAPEX: टेबल प्री-ट्रेनिंग थ्रू लर्निंग अ न्यूरल SQL एक्ज़ीक्यूटर](https: //arxiv.org/abs/2107.07653) कियान लियू, बेई चेन, जियाकी गुओ, मोर्टेज़ा ज़ियादी, ज़ेकी लिन, वीज़ू चेन, जियान-गुआंग लू द्वारा पोस्ट किया गया। -1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace). -1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani. -1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine -1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (Google/CMU की ओर से) कागज के साथ [संस्करण-एक्स: एक ब्लॉग मॉडल चौकस चौक मॉडल मॉडल] (https://arxivorg/abs/1901.02860) क्वोकोक वी. ले, रुस्लैन सलाखुतदी -1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. -1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal. -1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler -1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (माइक्रोसॉफ्ट रिसर्च से) साथ में दिया गया पेपर [UniSpeech: यूनिफाइड स्पीच रिप्रेजेंटेशन लर्निंग विद लेबलेड एंड अनलेबल्ड डेटा](https:/ /arxiv.org/abs/2101.07597) चेंगई वांग, यू वू, याओ कियान, केनिची कुमातानी, शुजी लियू, फुरु वेई, माइकल ज़ेंग, ज़ुएदोंग हुआंग द्वारा। -1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [UNISPEECH-SAT: यूनिवर्सल स्पीच रिप्रेजेंटेशन लर्निंग विद स्पीकर अवेयर प्री-ट्रेनिंग ](https://arxiv.org/abs/2110.05752) सानयुआन चेन, यू वू, चेंग्यी वांग, झेंगयांग चेन, झूओ चेन, शुजी लियू, जियान वू, याओ कियान, फुरु वेई, जिन्यु ली, जियांगज़ान यू द्वारा पोस्ट किया गया। -1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun. -1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (सिंघुआ यूनिवर्सिटी और ननकाई यूनिवर्सिटी से) साथ में पेपर [विजुअल अटेंशन नेटवर्क](https://arxiv.org/ pdf/2202.09741.pdf) मेंग-हाओ गुओ, चेंग-ज़े लू, झेंग-निंग लियू, मिंग-मिंग चेंग, शि-मिन हू द्वारा। -1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (मल्टीमीडिया कम्प्यूटिंग ग्रुप, नानजिंग यूनिवर्सिटी से) साथ में पेपर [वीडियोएमएई: मास्क्ड ऑटोएन्कोडर स्व-पर्यवेक्षित वीडियो प्री-ट्रेनिंग के लिए डेटा-कुशल सीखने वाले हैं] (https://arxiv.org/abs/2203.12602) ज़ान टोंग, यिबिंग सॉन्ग, जुए द्वारा वांग, लिमिन वांग द्वारा पोस्ट किया गया। -1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (NAVER AI Lab/Kakao Enterprise/Kakao Brain से) साथ में कागज [ViLT: Vision-and-Language Transformer बिना कनवल्शन या रीजन सुपरविजन](https://arxiv.org/abs/2102.03334) वोनजे किम, बोक्यूंग सोन, इल्डू किम द्वारा पोस्ट किया गया। -1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (गूगल एआई से) कागज के साथ [एक इमेज इज़ वर्थ 16x16 वर्ड्स: ट्रांसफॉर्मर्स फॉर इमेज रिकॉग्निशन एट स्केल](https://arxiv.org/abs/2010.11929) एलेक्सी डोसोवित्स्की, लुकास बेयर, अलेक्जेंडर कोलेसनिकोव, डिर्क वीसेनबोर्न, शियाओहुआ झाई, थॉमस अनटरथिनर, मुस्तफा देहघानी, मैथियास मिंडरर, जॉर्ज हेगोल्ड, सिल्वेन गेली, जैकब उस्ज़कोरेइट द्वारा हॉल्सबी द्वारा पोस्ट किया गया। -1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (UCLA NLP से) साथ वाला पेपर [VisualBERT: A Simple and Performant Baseline for Vision and Language](https:/ /arxiv.org/pdf/1908.03557) लियुनियन हेरोल्ड ली, मार्क यात्स्कर, दा यिन, चो-जुई हसीह, काई-वेई चांग द्वारा। -1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. -1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (मेटा एआई से) साथ में कागज [मास्कड ऑटोएन्कोडर स्केलेबल विजन लर्नर्स हैं](https://arxiv.org/ एब्स/2111.06377) कैमिंग हे, ज़िनेली चेन, सेनिंग ज़ी, यांगहो ली, पिओट्र डॉलर, रॉस गिर्शिक द्वारा। -1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (मेटा एआई से) साथ में कागज [लेबल-कुशल सीखने के लिए मास्क्ड स्याम देश के नेटवर्क](https://arxiv. org/abs/2204.07141) महमूद असरान, मथिल्डे कैरन, ईशान मिश्रा, पियोट्र बोजानोवस्की, फ्लोरियन बोर्डेस, पास्कल विंसेंट, आर्मंड जौलिन, माइकल रब्बत, निकोलस बल्लास द्वारा। -1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (फेसबुक एआई से) साथ में पेपर [wav2vec 2.0: ए फ्रेमवर्क फॉर सेल्फ-सुपरवाइज्ड लर्निंग ऑफ स्पीच रिप्रेजेंटेशन] (https://arxiv.org/abs/2006.11477) एलेक्सी बेवस्की, हेनरी झोउ, अब्देलरहमान मोहम्मद, माइकल औली द्वारा। -1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (Facebook AI से) साथ वाला पेपर [FAIRSEQ S2T: FAIRSEQ के साथ फास्ट स्पीच-टू-टेक्स्ट मॉडलिंग ](https://arxiv.org/abs/2010.05171) चांगहान वांग, यूं तांग, जुताई मा, ऐनी वू, सरव्या पोपुरी, दिमित्रो ओखोनको, जुआन पिनो द्वारा पोस्ट किया गया। -1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (Facebook AI से) साथ वाला पेपर [सरल और प्रभावी जीरो-शॉट क्रॉस-लिंगुअल फोनेम रिकॉग्निशन](https:/ /arxiv.org/abs/2109.11680) कियानटोंग जू, एलेक्सी बाएव्स्की, माइकल औली द्वारा। -1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (माइक्रोसॉफ्ट रिसर्च से) पेपर के साथ जारी किया गया [WavLM: फुल स्टैक के लिए बड़े पैमाने पर स्व-पर्यवेक्षित पूर्व-प्रशिक्षण स्पीच प्रोसेसिंग] (https://arxiv.org/abs/2110.13900) सानयुआन चेन, चेंगयी वांग, झेंगयांग चेन, यू वू, शुजी लियू, ज़ुओ चेन, जिन्यु ली, नाओयुकी कांडा, ताकुया योशियोका, ज़िओंग जिओ, जियान वू, लॉन्ग झोउ, शुओ रेन, यानमिन कियान, याओ कियान, जियान वू, माइकल ज़ेंग, फुरु वेई। -1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (OpenAI से) साथ में कागज [बड़े पैमाने पर कमजोर पर्यवेक्षण के माध्यम से मजबूत भाषण पहचान](https://cdn. openai.com/papers/whisper.pdf) एलेक रैडफोर्ड, जोंग वूक किम, ताओ जू, ग्रेग ब्रॉकमैन, क्रिस्टीन मैकलीवे, इल्या सुत्स्केवर द्वारा। -1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (माइक्रोसॉफ्ट रिसर्च से) कागज के साथ [एक्सपैंडिंग लैंग्वेज-इमेज प्रीट्रेन्ड मॉडल फॉर जनरल वीडियो रिकग्निशन](https: //arxiv.org/abs/2208.02816) बोलिन नी, होउवेन पेंग, मिंगाओ चेन, सोंगयांग झांग, गाओफेंग मेंग, जियानलोंग फू, शिमिंग जियांग, हैबिन लिंग द्वारा। -1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (Meta AI से) Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe. द्वाराअनुसंधान पत्र [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) के साथ जारी किया गया -1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. -1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (फेसबुक से) साथ में पेपर [क्रॉस-लिंगुअल लैंग्वेज मॉडल प्रीट्रेनिंग] (https://arxiv.org/abs/1901.07291) गिलाउम लैम्पल और एलेक्सिस कोनो द्वारा। -1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (माइक्रोसॉफ्ट रिसर्च से) साथ में कागज [ProphetNet: प्रेडिक्टिंग फ्यूचर एन-ग्राम फॉर सीक्वेंस-टू- सीक्वेंस प्री-ट्रेनिंग](https://arxiv.org/abs/2001.04063) यू यान, वीज़ेन क्यूई, येयुन गोंग, दयाहेंग लियू, नान डुआन, जिउशेंग चेन, रुओफ़ेई झांग और मिंग झोउ द्वारा। -1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (फेसबुक एआई से), साथ में पेपर [अनसुपरवाइज्ड क्रॉस-लिंगुअल रिप्रेजेंटेशन लर्निंग एट स्केल] (https://arxiv.org/abs/1911.02116) एलेक्सिस कोन्यू*, कार्तिकेय खंडेलवाल*, नमन गोयल, विश्रव चौधरी, गिलाउम वेनज़ेक, फ्रांसिस्को गुज़मैन द्वारा , एडौर्ड ग्रेव, मायल ओट, ल्यूक ज़ेटलमॉयर और वेसेलिन स्टोयानोव द्वारा। -1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (Facebook AI से) साथ में कागज [बहुभाषी नकाबपोश भाषा के लिए बड़े पैमाने पर ट्रांसफॉर्मर ] मॉडलिंग](https://arxiv.org/abs/2105.00572) नमन गोयल, जिंगफेई डू, मायल ओट, गिरि अनंतरामन, एलेक्सिस कोनो द्वारा पोस्ट किया गया। -1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa. -1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (Google/CMU से) साथ वाला पेपर [XLNet: जनरलाइज्ड ऑटोरेग्रेसिव प्रीट्रेनिंग फॉर लैंग्वेज अंडरस्टैंडिंग](https://arxiv ज़ीलिन यांग*, ज़िहांग दाई*, यिमिंग यांग, जैम कार्बोनेल, रुस्लान सलाखुतदीनोव, क्वोक वी. ले ​​द्वारा .org/abs/1906.08237)। -1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (Facebook AI से) साथ वाला पेपर [XLS-R: सेल्फ सुपरवाइज्ड क्रॉस-लिंगुअल स्पीच रिप्रेजेंटेशन लर्निंग एट स्केल](https://arxiv.org/abs/2111.09296) अरुण बाबू, चांगहान वांग, एंड्रोस तजंद्रा, कुशाल लखोटिया, कियानटोंग जू, नमन गोयल, कृतिका सिंह, पैट्रिक वॉन प्लैटन, याथार्थ सराफ, जुआन पिनो, एलेक्सी बेवस्की, एलेक्सिस कोन्यू, माइकल औली द्वारा पोस्ट किया गया। -1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (फेसबुक एआई से) साथ में पेपर [अनसुपरवाइज्ड क्रॉस-लिंगुअल रिप्रेजेंटेशन लर्निंग फॉर स्पीच रिकग्निशन] (https://arxiv.org/abs/2006.13979) एलेक्सिस कोन्यू, एलेक्सी बेवस्की, रोनन कोलोबर्ट, अब्देलरहमान मोहम्मद, माइकल औली द्वारा। -1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (हुआझोंग यूनिवर्सिटी ऑफ साइंस एंड टेक्नोलॉजी से) साथ में पेपर [यू ओनली लुक एट वन सीक्वेंस: रीथिंकिंग ट्रांसफॉर्मर इन विज़न थ्रू ऑब्जेक्ट डिटेक्शन](https://arxiv.org/abs/2106.00666) युक्सिन फेंग, बेनचेंग लियाओ, जिंगगैंग वांग, जेमिन फेंग, जियांग क्यूई, रुई वू, जियानवेई नीयू, वेन्यू लियू द्वारा पोस्ट किया गया। -1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (विस्कॉन्सिन विश्वविद्यालय - मैडिसन से) साथ में पेपर [यू ओनली सैंपल (लगभग) ज़ानपेंग ज़ेंग, युनयांग ज़िओंग द्वारा , सत्य एन. रवि, शैलेश आचार्य, ग्लेन फंग, विकास सिंह द्वारा पोस्ट किया गया। -1. एक नए मॉडल में योगदान देना चाहते हैं? नए मॉडल जोड़ने में आपका मार्गदर्शन करने के लिए हमारे पास एक **विस्तृत मार्गदर्शिका और टेम्प्लेट** है। आप उन्हें [`टेम्पलेट्स`](./templates) निर्देशिका में पा सकते हैं। पीआर शुरू करने से पहले [योगदान दिशानिर्देश] (./CONTRIBUTING.md) देखना और अनुरक्षकों से संपर्क करना या प्रतिक्रिया प्राप्त करने के लिए एक नया मुद्दा खोलना याद रखें। - -यह जांचने के लिए कि क्या किसी मॉडल में पहले से ही Flax, PyTorch या TensorFlow का कार्यान्वयन है, या यदि उसके पास Tokenizers लाइब्रेरी में संबंधित टोकन है, तो [यह तालिका] (https://huggingface.co/ docs/transformers/index#supported) देखें। -फ्रेमवर्क)। - -इन कार्यान्वयनों का परीक्षण कई डेटासेट पर किया गया है (देखें केस स्क्रिप्ट का उपयोग करें) और वैनिला कार्यान्वयन के लिए तुलनात्मक रूप से प्रदर्शन करना चाहिए। आप उपयोग के मामले के दस्तावेज़ [इस अनुभाग](https://huggingface.co/docs/transformers/examples) में व्यवहार का विवरण पढ़ सकते हैं। - - -## अधिक समझें - -|अध्याय | विवरण | -|-|-| -| [दस्तावेज़ीकरण](https://huggingface.co/transformers/) | पूरा एपीआई दस्तावेज़ीकरण और ट्यूटोरियल | -| [कार्य सारांश](https://huggingface.co/docs/transformers/task_summary) | ट्रांसफॉर्मर समर्थित कार्य | -| [प्रीप्रोसेसिंग ट्यूटोरियल](https://huggingface.co/docs/transformers/preprocessing) | मॉडल के लिए डेटा तैयार करने के लिए `टोकनाइज़र` का उपयोग करना | -| [प्रशिक्षण और फाइन-ट्यूनिंग](https://huggingface.co/docs/transformers/training) | PyTorch/TensorFlow के ट्रेनिंग लूप या `ट्रेनर` API में ट्रांसफॉर्मर द्वारा दिए गए मॉडल का उपयोग करें | -| [क्विक स्टार्ट: ट्वीकिंग एंड यूज़ केस स्क्रिप्ट्स](https://github.com/huggingface/transformers/tree/main/examples) | विभिन्न कार्यों के लिए केस स्क्रिप्ट का उपयोग करें | -| [मॉडल साझा करना और अपलोड करना](https://huggingface.co/docs/transformers/model_sharing) | समुदाय के साथ अपने फाइन टूनड मॉडल अपलोड और साझा करें | -| [माइग्रेशन](https://huggingface.co/docs/transformers/migration) | `पाइटोरच-ट्रांसफॉर्मर्स` या `पाइटोरच-प्रीट्रेनड-बर्ट` से ट्रांसफॉर्मर में माइग्रेट करना | - -## उद्धरण - -हमने आधिकारिक तौर पर इस लाइब्रेरी का [पेपर](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) प्रकाशित किया है, अगर आप ट्रान्सफ़ॉर्मर्स लाइब्रेरी का उपयोग करते हैं, तो कृपया उद्धृत करें: -```bibtex -@inproceedings{wolf-etal-2020-transformers, - title = "Transformers: State-of-the-Art Natural Language Processing", - author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", - month = oct, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", - pages = "38--45" -} -``` diff --git a/spaces/chrismay/Sentiment-demo-app/app.py b/spaces/chrismay/Sentiment-demo-app/app.py deleted file mode 100644 index 8d36a09acf82da2c5138d0a999571267bc384192..0000000000000000000000000000000000000000 --- a/spaces/chrismay/Sentiment-demo-app/app.py +++ /dev/null @@ -1,18 +0,0 @@ -import streamlit as st -from transformers import pipeline -import gc - -st.header("Sentiment-demo-app") -st.subheader("Please be patient and wait up to a minute until the demo app is loaded.") -st.caption("This is a very simple demo application for a zero-shot classification pipeline to classify positive, neutral, or negative sentiment for a short text. Enter your text in the box below and press CTRl+ENTER to run the model.") - -classifier = pipeline("zero-shot-classification", model='facebook/bart-large-mnli') - -text = st.text_area('Enter text here!') -candidate_labels = ['Positive', 'Neutral', 'Negative'] - -if text: - out = classifier(text, candidate_labels) - st.json(out) - del out - gc.collect() \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/publish.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/publish.py deleted file mode 100644 index 72e0a290d797a86adf4cc2a43c74e3685cd55e53..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/bin/publish.py +++ /dev/null @@ -1,19 +0,0 @@ -import argparse -import re -import subprocess - -version_pattern = r'\d\.\d\.\d' -parser = argparse.ArgumentParser() -parser.add_argument('version', help='a SEMVER string X.Y.Z') -args = parser.parse_args() -if not re.match(version_pattern, args.version): - print('argument must be SEMVER string in format X.Y.Z') -else: - with open('setup.py') as fp: - old_setupfile = fp.read() - new_setupfile = re.sub(f"version='{version_pattern}'", - f"version='{args.version}'", old_setupfile) - with open('setup.py', 'w') as fp: - print(new_setupfile, file=fp) - - subprocess.run(['./publish.sh', 'v' + args.version]) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/hdrs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/hdrs.py deleted file mode 100644 index a619f2543e47cbd708a67cd3dd756fdd3094aa6b..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/hdrs.py +++ /dev/null @@ -1,114 +0,0 @@ -"""HTTP Headers constants.""" - -# After changing the file content call ./tools/gen.py -# to regenerate the headers parser -import sys -from typing import Set - -from multidict import istr - -if sys.version_info >= (3, 8): - from typing import Final -else: - from typing_extensions import Final - -METH_ANY: Final[str] = "*" -METH_CONNECT: Final[str] = "CONNECT" -METH_HEAD: Final[str] = "HEAD" -METH_GET: Final[str] = "GET" -METH_DELETE: Final[str] = "DELETE" -METH_OPTIONS: Final[str] = "OPTIONS" -METH_PATCH: Final[str] = "PATCH" -METH_POST: Final[str] = "POST" -METH_PUT: Final[str] = "PUT" -METH_TRACE: Final[str] = "TRACE" - -METH_ALL: Final[Set[str]] = { - METH_CONNECT, - METH_HEAD, - METH_GET, - METH_DELETE, - METH_OPTIONS, - METH_PATCH, - METH_POST, - METH_PUT, - METH_TRACE, -} - -ACCEPT: Final[istr] = istr("Accept") -ACCEPT_CHARSET: Final[istr] = istr("Accept-Charset") -ACCEPT_ENCODING: Final[istr] = istr("Accept-Encoding") -ACCEPT_LANGUAGE: Final[istr] = istr("Accept-Language") -ACCEPT_RANGES: Final[istr] = istr("Accept-Ranges") -ACCESS_CONTROL_MAX_AGE: Final[istr] = istr("Access-Control-Max-Age") -ACCESS_CONTROL_ALLOW_CREDENTIALS: Final[istr] = istr("Access-Control-Allow-Credentials") -ACCESS_CONTROL_ALLOW_HEADERS: Final[istr] = istr("Access-Control-Allow-Headers") -ACCESS_CONTROL_ALLOW_METHODS: Final[istr] = istr("Access-Control-Allow-Methods") -ACCESS_CONTROL_ALLOW_ORIGIN: Final[istr] = istr("Access-Control-Allow-Origin") -ACCESS_CONTROL_EXPOSE_HEADERS: Final[istr] = istr("Access-Control-Expose-Headers") -ACCESS_CONTROL_REQUEST_HEADERS: Final[istr] = istr("Access-Control-Request-Headers") -ACCESS_CONTROL_REQUEST_METHOD: Final[istr] = istr("Access-Control-Request-Method") -AGE: Final[istr] = istr("Age") -ALLOW: Final[istr] = istr("Allow") -AUTHORIZATION: Final[istr] = istr("Authorization") -CACHE_CONTROL: Final[istr] = istr("Cache-Control") -CONNECTION: Final[istr] = istr("Connection") -CONTENT_DISPOSITION: Final[istr] = istr("Content-Disposition") -CONTENT_ENCODING: Final[istr] = istr("Content-Encoding") -CONTENT_LANGUAGE: Final[istr] = istr("Content-Language") -CONTENT_LENGTH: Final[istr] = istr("Content-Length") -CONTENT_LOCATION: Final[istr] = istr("Content-Location") -CONTENT_MD5: Final[istr] = istr("Content-MD5") -CONTENT_RANGE: Final[istr] = istr("Content-Range") -CONTENT_TRANSFER_ENCODING: Final[istr] = istr("Content-Transfer-Encoding") -CONTENT_TYPE: Final[istr] = istr("Content-Type") -COOKIE: Final[istr] = istr("Cookie") -DATE: Final[istr] = istr("Date") -DESTINATION: Final[istr] = istr("Destination") -DIGEST: Final[istr] = istr("Digest") -ETAG: Final[istr] = istr("Etag") -EXPECT: Final[istr] = istr("Expect") -EXPIRES: Final[istr] = istr("Expires") -FORWARDED: Final[istr] = istr("Forwarded") -FROM: Final[istr] = istr("From") -HOST: Final[istr] = istr("Host") -IF_MATCH: Final[istr] = istr("If-Match") -IF_MODIFIED_SINCE: Final[istr] = istr("If-Modified-Since") -IF_NONE_MATCH: Final[istr] = istr("If-None-Match") -IF_RANGE: Final[istr] = istr("If-Range") -IF_UNMODIFIED_SINCE: Final[istr] = istr("If-Unmodified-Since") -KEEP_ALIVE: Final[istr] = istr("Keep-Alive") -LAST_EVENT_ID: Final[istr] = istr("Last-Event-ID") -LAST_MODIFIED: Final[istr] = istr("Last-Modified") -LINK: Final[istr] = istr("Link") -LOCATION: Final[istr] = istr("Location") -MAX_FORWARDS: Final[istr] = istr("Max-Forwards") -ORIGIN: Final[istr] = istr("Origin") -PRAGMA: Final[istr] = istr("Pragma") -PROXY_AUTHENTICATE: Final[istr] = istr("Proxy-Authenticate") -PROXY_AUTHORIZATION: Final[istr] = istr("Proxy-Authorization") -RANGE: Final[istr] = istr("Range") -REFERER: Final[istr] = istr("Referer") -RETRY_AFTER: Final[istr] = istr("Retry-After") -SEC_WEBSOCKET_ACCEPT: Final[istr] = istr("Sec-WebSocket-Accept") -SEC_WEBSOCKET_VERSION: Final[istr] = istr("Sec-WebSocket-Version") -SEC_WEBSOCKET_PROTOCOL: Final[istr] = istr("Sec-WebSocket-Protocol") -SEC_WEBSOCKET_EXTENSIONS: Final[istr] = istr("Sec-WebSocket-Extensions") -SEC_WEBSOCKET_KEY: Final[istr] = istr("Sec-WebSocket-Key") -SEC_WEBSOCKET_KEY1: Final[istr] = istr("Sec-WebSocket-Key1") -SERVER: Final[istr] = istr("Server") -SET_COOKIE: Final[istr] = istr("Set-Cookie") -TE: Final[istr] = istr("TE") -TRAILER: Final[istr] = istr("Trailer") -TRANSFER_ENCODING: Final[istr] = istr("Transfer-Encoding") -UPGRADE: Final[istr] = istr("Upgrade") -URI: Final[istr] = istr("URI") -USER_AGENT: Final[istr] = istr("User-Agent") -VARY: Final[istr] = istr("Vary") -VIA: Final[istr] = istr("Via") -WANT_DIGEST: Final[istr] = istr("Want-Digest") -WARNING: Final[istr] = istr("Warning") -WWW_AUTHENTICATE: Final[istr] = istr("WWW-Authenticate") -X_FORWARDED_FOR: Final[istr] = istr("X-Forwarded-For") -X_FORWARDED_HOST: Final[istr] = istr("X-Forwarded-Host") -X_FORWARDED_PROTO: Final[istr] = istr("X-Forwarded-Proto") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/chardistribution.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/chardistribution.py deleted file mode 100644 index 176cb996408e6681a88722783919efc0e9dafb29..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/chardistribution.py +++ /dev/null @@ -1,261 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import Tuple, Union - -from .big5freq import ( - BIG5_CHAR_TO_FREQ_ORDER, - BIG5_TABLE_SIZE, - BIG5_TYPICAL_DISTRIBUTION_RATIO, -) -from .euckrfreq import ( - EUCKR_CHAR_TO_FREQ_ORDER, - EUCKR_TABLE_SIZE, - EUCKR_TYPICAL_DISTRIBUTION_RATIO, -) -from .euctwfreq import ( - EUCTW_CHAR_TO_FREQ_ORDER, - EUCTW_TABLE_SIZE, - EUCTW_TYPICAL_DISTRIBUTION_RATIO, -) -from .gb2312freq import ( - GB2312_CHAR_TO_FREQ_ORDER, - GB2312_TABLE_SIZE, - GB2312_TYPICAL_DISTRIBUTION_RATIO, -) -from .jisfreq import ( - JIS_CHAR_TO_FREQ_ORDER, - JIS_TABLE_SIZE, - JIS_TYPICAL_DISTRIBUTION_RATIO, -) -from .johabfreq import JOHAB_TO_EUCKR_ORDER_TABLE - - -class CharDistributionAnalysis: - ENOUGH_DATA_THRESHOLD = 1024 - SURE_YES = 0.99 - SURE_NO = 0.01 - MINIMUM_DATA_THRESHOLD = 3 - - def __init__(self) -> None: - # Mapping table to get frequency order from char order (get from - # GetOrder()) - self._char_to_freq_order: Tuple[int, ...] = tuple() - self._table_size = 0 # Size of above table - # This is a constant value which varies from language to language, - # used in calculating confidence. See - # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html - # for further detail. - self.typical_distribution_ratio = 0.0 - self._done = False - self._total_chars = 0 - self._freq_chars = 0 - self.reset() - - def reset(self) -> None: - """reset analyser, clear any state""" - # If this flag is set to True, detection is done and conclusion has - # been made - self._done = False - self._total_chars = 0 # Total characters encountered - # The number of characters whose frequency order is less than 512 - self._freq_chars = 0 - - def feed(self, char: Union[bytes, bytearray], char_len: int) -> None: - """feed a character with known length""" - if char_len == 2: - # we only care about 2-bytes character in our distribution analysis - order = self.get_order(char) - else: - order = -1 - if order >= 0: - self._total_chars += 1 - # order is valid - if order < self._table_size: - if 512 > self._char_to_freq_order[order]: - self._freq_chars += 1 - - def get_confidence(self) -> float: - """return confidence based on existing data""" - # if we didn't receive any character in our consideration range, - # return negative answer - if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD: - return self.SURE_NO - - if self._total_chars != self._freq_chars: - r = self._freq_chars / ( - (self._total_chars - self._freq_chars) * self.typical_distribution_ratio - ) - if r < self.SURE_YES: - return r - - # normalize confidence (we don't want to be 100% sure) - return self.SURE_YES - - def got_enough_data(self) -> bool: - # It is not necessary to receive all data to draw conclusion. - # For charset detection, certain amount of data is enough - return self._total_chars > self.ENOUGH_DATA_THRESHOLD - - def get_order(self, _: Union[bytes, bytearray]) -> int: - # We do not handle characters based on the original encoding string, - # but convert this encoding string to a number, here called order. - # This allows multiple encodings of a language to share one frequency - # table. - return -1 - - -class EUCTWDistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER - self._table_size = EUCTW_TABLE_SIZE - self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for euc-TW encoding, we are interested - # first byte range: 0xc4 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char = byte_str[0] - if first_char >= 0xC4: - return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1 - return -1 - - -class EUCKRDistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER - self._table_size = EUCKR_TABLE_SIZE - self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for euc-KR encoding, we are interested - # first byte range: 0xb0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char = byte_str[0] - if first_char >= 0xB0: - return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1 - return -1 - - -class JOHABDistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER - self._table_size = EUCKR_TABLE_SIZE - self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - first_char = byte_str[0] - if 0x88 <= first_char < 0xD4: - code = first_char * 256 + byte_str[1] - return JOHAB_TO_EUCKR_ORDER_TABLE.get(code, -1) - return -1 - - -class GB2312DistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER - self._table_size = GB2312_TABLE_SIZE - self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for GB2312 encoding, we are interested - # first byte range: 0xb0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if (first_char >= 0xB0) and (second_char >= 0xA1): - return 94 * (first_char - 0xB0) + second_char - 0xA1 - return -1 - - -class Big5DistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER - self._table_size = BIG5_TABLE_SIZE - self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for big5 encoding, we are interested - # first byte range: 0xa4 -- 0xfe - # second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if first_char >= 0xA4: - if second_char >= 0xA1: - return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63 - return 157 * (first_char - 0xA4) + second_char - 0x40 - return -1 - - -class SJISDistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER - self._table_size = JIS_TABLE_SIZE - self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for sjis encoding, we are interested - # first byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe - # second byte range: 0x40 -- 0x7e, 0x81 -- oxfe - # no validation needed here. State machine has done that - first_char, second_char = byte_str[0], byte_str[1] - if 0x81 <= first_char <= 0x9F: - order = 188 * (first_char - 0x81) - elif 0xE0 <= first_char <= 0xEF: - order = 188 * (first_char - 0xE0 + 31) - else: - return -1 - order = order + second_char - 0x40 - if second_char > 0x7F: - order = -1 - return order - - -class EUCJPDistributionAnalysis(CharDistributionAnalysis): - def __init__(self) -> None: - super().__init__() - self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER - self._table_size = JIS_TABLE_SIZE - self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO - - def get_order(self, byte_str: Union[bytes, bytearray]) -> int: - # for euc-JP encoding, we are interested - # first byte range: 0xa0 -- 0xfe - # second byte range: 0xa1 -- 0xfe - # no validation needed here. State machine has done that - char = byte_str[0] - if char >= 0xA0: - return 94 * (char - 0xA1) + byte_str[1] - 0xA1 - return -1 diff --git a/spaces/cihyFjudo/fairness-paper-search/Badrinath Ki Dulhania tamil dubbed 1080p online The story of a small-town boy and a girl who wants an independent life.md b/spaces/cihyFjudo/fairness-paper-search/Badrinath Ki Dulhania tamil dubbed 1080p online The story of a small-town boy and a girl who wants an independent life.md deleted file mode 100644 index ce87d4ddb7967d4981ef6a88abe7b53554f111c4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Badrinath Ki Dulhania tamil dubbed 1080p online The story of a small-town boy and a girl who wants an independent life.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Badrinath Ki Dulhania tamil dubbed 1080p online


      Download 🔗 https://tinurli.com/2uwip1



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cihyFjudo/fairness-paper-search/University of MD College Park Library Hours and Services Everything You Need to Know.md b/spaces/cihyFjudo/fairness-paper-search/University of MD College Park Library Hours and Services Everything You Need to Know.md deleted file mode 100644 index 321d98f97ab24f6e0521fc6e092743aad957c9b2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/University of MD College Park Library Hours and Services Everything You Need to Know.md +++ /dev/null @@ -1,17 +0,0 @@ -
      -

      The UMD Libraries are a key academic resource that supports the teaching, learning, and research goals of the university. The various materials collected by the libraries can be accessed by students, scholars, and the general public. The libraries feature 4 million volumes and a substantial number of e-resources (including more than 17,000 e-journal titles), a variety of archives and special collections, and a host of technological resources which enable remote online access to the Libraries' holdings and services. They are members of both the Big Ten Academic Alliance (BTAA)[2] and the University System of Maryland and Affiliate Institutions (USMAI). The libraries are currently ranked 10th in electronic resources as a percentage of total library materials by the 115-member Association of Research Libraries.[3][4]

      -

      university of md college park library hours


      Download Zip >> https://tinurli.com/2uwk23



      -

      A library/gym building was constructed on campus in 1893, which survived the Great Fire of 1912;[5][6] the building, which stood where Tydings Hall now stands, was razed in 1958.[7] A new library building, called Shoemaker Library (now known as the Shoemaker Building), was constructed in 1931 (named for Samuel M. Shoemaker, chairman of the Board of Regents from 1916 to 1933), and served as the university's main library until the construction of McKeldin Library in 1958.[8][9]

      -

      The university's library became a Federal depository library in 1925, a status it has held since. In 1965, the library system became the Regional Depository for Maryland, Delaware, and the District of Columbia.[10]

      -

      McKeldin Library is the main branch of the University of Maryland Library system. Constructed in 1958, the building is named for Theodore McKeldin, the former Governor of Maryland.[14] McKeldin Library is one of the largest buildings on campus,[15] consisting of seven floors and a basement.[16] Located at the western end of McKeldin Mall, the library is home to the university's General Collection.[17] and the 90,000 volume East Asia Collection.[18] McKeldin Library also serves as a regional Federal depository library, housing the U.S. Government Information, Maps & GIS Services collection,[19] and previously hosted the Maryland Institute for Technology in the Humanities (MITH)[20] until the summer of 2012, when MITH moved to its new home in the university's Hornbake Library. Also housed in McKeldin Library are several computer labs, a copy shop, and Footnotes Café.[21]

      -

      McKeldin Library remains open to students, faculty, and staff on a 24/5 schedule most of the Fall and Spring semesters (from 11am Sunday morning to 8pm Friday night), in order to provide late night study hours for university students; A UMD (College Park) identification card is necessary to gain access to the building during the late night study hours (after 11pm and before 8am). Note: During Fall and Spring semesters McKeldin library is closed to all users on Fridays after 8pm. Saturday open hours are from 10am to 9pm. 24/5 re-opening is at 11am Sunday morning.[22]

      -

      Former Dean of Libraries Patricia Steele announced plans to gut the second floor of McKeldin during the summer of 2010 in order to make room for a new "Terrapin Learning Commons" (commonly referred to as the TLC). Steele hoped to "reevaluate" all seven of the library's floors, with the ultimate goal of (gradually) transforming McKeldin into a study-oriented, laptop-friendly central library for the university, and perhaps creating a floor specifically designed for graduate students.[23][24][25]The new laptop-friendly learning commons opened for the Fall 2011 semester, with plans to add multimedia workstations and lockers which can recharge laptops in between classes. A graduate-only study room opened later during the fall semester.[26] In September 2012, the TLC expanded to include a Tech Desk,[27] which provides a variety of services, including equipment loan, and specialized printing support.[28]

      -

      -

      Hours: Visitor parking is enforced seven days a week from 7 a.m. to midnight, unless otherwise noted on the meter. With the exception of Labor Day, meters are not enforced on university-observed holidays.

      -

      Visitors and vehicles displaying state-issued disabled parking identification may park in designated visitor spaces including those that are ADA accessible. Parking in campus lots that begin with a letter or number during restricted hours is not permitted. This includes accessible spaces.

      -

      At individual metered spaces: Guests with disabilities may park at individual metered spaces for twice the amount of time listed on the meter OR four hours, whichever is shorter, and do NOT need to pay for parking during that time.

      -

      Designated visitor parking on campus is available using digital pay stations in one surface lot and four parking garages. A limited number of metered spaces are also located around campus, and many faculty and staff lots offer unrestricted parking during evening hours.

      -

      Parking fines and towing fees will put a damper on your UMD visit. So, before you hit the road, review your parking options, their hours of operation, and what they cost. Here are the 9 best places for visitors to park at the University of Maryland, both on and off campus.

      -


      John E. Harms Academic Center, Prince Frederick campus, College of Southern Maryland, Prince Frederick, Maryland, November 2017. Photo by Diane F. Evartt.

    8. College 411: A Student Guide to Higher Education & Financial Aid in Maryland (MHEC)
    9. Plan for College (U.S. Department of Education)
    10. Financial Aid Resources
    11. Private (or Alternative) Student Loans
    12. Scholarships & Savings Plans George Peabody statue before Peabody Institute, Mount Vernon Place, Baltimore, Maryland, March 2009. Photo by Diane F. Evartt.PUBLIC UNIVERSITIES & COLLEGES
      In Maryland, public higher education is served by:
      • Baltimore City Community College
      • Morgan State University
      • St. Mary's College of Maryland
      • University System of Maryland, which includes eleven campuses:
      University of Maryland School of Law, 500 West Baltimore St., Baltimore, Maryland, December 2007. Photo by Diane F. Evartt.
    13. Bowie State University
    14. Coppin State University
    15. Frostburg State University
    16. Salisbury University
    17. Towson University
    18. University of Baltimore
    19. University of Maryland, Baltimore
    20. University of Maryland Baltimore County
    21. University of Maryland, College Park
    22. University of Maryland Eastern Shore
    23. University of Maryland Global Campus (formerly University of Maryland University College)College of Agriculture & Natural Resources, Symons Hall, University of Maryland, College Park, Maryland, August 2003. Photo by Diane F. Evartt.The University System of Maryland also includes the University of Maryland Center for Environmental Science.In addition, sixteen community colleges and eight regional higher education centers serve the public. Information about public universities and colleges (including community colleges) is available from the Maryland Higher Education Commission.Careers Center, Anne Arundel Community College, Arnold, Maryland, October 2015. Photo by Diane F. Evartt.In Fall 2021, some 271,900 students (undergraduate, graduate, & professional) enrolled at Maryland public universities and colleges. For undergraduates in Fall 2020, Maryland residents constituted 91.9% of enrollees at community colleges, 66.4% at public four-year institutions, and 41.9% at independent universities and colleges.In the 2021-22 school year, the average yearly cost for resident undergraduates attending a State college is $9,820. For nonresidents, the average is $23,654. William E. Henry Administration Building, Bowie State University, Bowie, Maryland, September 2017. Photo by Diane F. Evartt.A consortium of community colleges and universities offer courses online to students unable to attend classes on a campus through MarylandOnline. Initiated in the fall of 1999, the consortium now includes 19 members: Allegany College of Maryland; Anne Arundel Community College; Baltimore City Community College; Community College of Baltimore County; Carroll Community College; Cecil College; Chesapeake College; College of Southern Maryland; Frederick Community College; Garrett College; Hagerstown Community College; Harford Community College; Howard Community College; Montgomery College; Morgan State University; Prince George's Community College; Stevenson University; University of Maryland Global Campus; and Wor-Wic Community College.Maryland also participates in the Academic Common Market, an education consortium of fifteen southern states. Reduced tuition is offered to students who attend schools out of state because their program is not available at a public in-state college or university. States who participate with Maryland in this program are: Alabama; Arkansas; Delaware; Florida (graduate programs); Georgia; Kentucky; Louisiana; Mississippi; Oklahoma; South Carolina; Tennessee; Texas (graduate programs); Virginia; and West Virginia. Health Sciences & Human Services Library, University of Maryland School of Medicine, 601 West Lombard St., Baltimore, Maryland, September 2018. Photo by Diane F. Evartt.Postgraduate professional degree programs are offered by: University of Baltimore (law); University of Maryland, Baltimore (dentistry, law, medicine, pharmacy); University of Maryland, College Park (veterinary medicine); and The Johns Hopkins University (medicine)Maryland has 20 higher education institutions designated as National Center of Academic Excellence in Cybersecurity, a joint program of the National Security Agency, the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, the National Institute of Standards and Technology/National Initiative on Cybersecurity Education, the National Science Foundation, the Department of Defense Office of the Chief Information Officer, and US Cyber Command. These institutions include Anne Arundel Community College, Bowie State University, Capitol Technology University, Cecil College, Community College of Baltimore County, Hagerstown Community College, Harford Community College, Hood College, Howard Community College, The Johns Hopkins University, Montgomery College, Morgan State University, Prince George's Community College, SANS Technology Institute, College of Southern Maryland, Towson University, United States Naval Academy, University of Maryland, Baltimore County, University of Maryland, College Park, and University of Maryland Global Campus. Francis King Carey School of Law, University of Maryland, 500 West Baltimore St., Baltimore, Maryland, August 2018. Photo by Diane F. Evartt.Scholarships & Savings Plans. To further education in Maryland, the State provides numerous assistance and scholarship options for college. The Maryland 529 Board oversees the College Savings Plans of Maryland, including the Maryland Prepaid College Trust, and the Maryland College Investment Plan. Assistance also is offered through the Office of Student Financial Assistance. Within the Maryland Higher Education Commission, the Office is responsible for all State student financial aid programs.Wellness & Aquatics Center (Building D), Leonardtown Campus, College of Southern Maryland, Hollywood Road, Leonardtown, Maryland, November 2017. Photo by Diane F. Evartt.Founded in Annapolis in 1845, the U.S. Naval Academy is a federal institution that prepares young men and women to become professional officers in the U.S. Navy and the U.S. Marine Corps. Each academic year, over 4,000 midshipmen enroll as full-time students at the U.S. Naval Academy. After four years, graduating midshipmen are granted a Bachelor of Science degree in one of 26 majors and are commissioned as either ensigns in the U.S. Navy or second lieutenants in the U.S. Marine Corps for a minimum of five years on active duty.
      Midshipmen, U.S. Naval Academy, Annapolis, Maryland, July 2009. Photo by Andrew L. Baringer.In U.S. News & World Report magazine's "2022 Best Colleges" list, the U.S. Naval Academy ranked first as the Top Public School among National Liberal Arts Colleges and tied for sixth among National Liberal Arts Colleges.

      U.S. Naval Academy Chapel, Annapolis, Maryland, April 1999. Photo by Diane P. Frese.During Commissioning Week, prior to graduation, the Navy's flight demonstration squadron, the Blue Angels, performs over Annapolis.U.S. Navy Blue Angel (left), Navy-Marine Corps Memorial Stadium, 550 Taylor Ave., Annapolis, Maryland, April 2016. Photo by Sarah A. Hanks.U.S. Navy Blue Angels (right), Baltimore, Maryland, September 2014. Photo by Sarah A. Hanks.Maryland Government
      Maryland Constitutional Offices & Agencies
      Maryland Departments
      Maryland Independent Agencies
      Maryland Executive Commissions, Committees, Task Forces, & Advisory Boards
      Interstate Agencies (Maryland memberships)
      Maryland Universities & Colleges
      Maryland Counties
      Maryland Municipalities
      Maryland at a GlanceMaryland Manual On-LineSearch the Manual
      e-mail: mdmanual@maryland.gov

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/Makefile b/spaces/colakin/video-generater/public/ffmpeg/fftools/Makefile deleted file mode 100644 index 56820e6bc8384b3236487a78e6db49d3279d9eda..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/Makefile +++ /dev/null @@ -1,62 +0,0 @@ -AVPROGS-$(CONFIG_FFMPEG) += ffmpeg -AVPROGS-$(CONFIG_FFPLAY) += ffplay -AVPROGS-$(CONFIG_FFPROBE) += ffprobe - -AVPROGS := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF)) -PROGS += $(AVPROGS) - -AVBASENAMES = ffmpeg ffplay ffprobe -ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF)) -ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF)) - -OBJS-ffmpeg += \ - fftools/ffmpeg_dec.o \ - fftools/ffmpeg_demux.o \ - fftools/ffmpeg_enc.o \ - fftools/ffmpeg_filter.o \ - fftools/ffmpeg_hw.o \ - fftools/ffmpeg_mux.o \ - fftools/ffmpeg_mux_init.o \ - fftools/ffmpeg_opt.o \ - fftools/objpool.o \ - fftools/sync_queue.o \ - fftools/thread_queue.o \ - -define DOFFTOOL -OBJS-$(1) += fftools/cmdutils.o fftools/opt_common.o fftools/$(1).o $(OBJS-$(1)-yes) -ifdef HAVE_GNU_WINDRES -OBJS-$(1) += fftools/fftoolsres.o -endif -$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1)) -$$(OBJS-$(1)): | fftools -$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1)) -$(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1)) -$(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(EXTRALIBS-$(1)) --include $$(OBJS-$(1):.o=.d) -endef - -$(foreach P,$(AVPROGS-yes),$(eval $(call DOFFTOOL,$(P)))) - -all: $(AVPROGS) - -fftools/ffprobe.o fftools/cmdutils.o: libavutil/ffversion.h | fftools -OUTDIRS += fftools - -ifdef AVPROGS -install: install-progs install-data -endif - -install-progs-yes: -install-progs-$(CONFIG_SHARED): install-libs - -install-progs: install-progs-yes $(AVPROGS) - $(Q)mkdir -p "$(BINDIR)" - $(INSTALL) -c -m 755 $(AVPROGS) "$(BINDIR)" - -uninstall: uninstall-progs - -uninstall-progs: - $(RM) $(addprefix "$(BINDIR)/", $(ALLAVPROGS)) - -clean:: - $(RM) $(ALLAVPROGS) $(ALLAVPROGS_G) $(CLEANSUFFIXES:%=fftools/%) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec.c deleted file mode 100644 index fc0cbeb4938b71ebf670ed0a1f23afe61464abb5..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ac3dec.c +++ /dev/null @@ -1,1876 +0,0 @@ -/* - * AC-3 Audio Decoder - * This code was developed as part of Google Summer of Code 2006. - * E-AC-3 support was added as part of Google Summer of Code 2007. - * - * Copyright (c) 2006 Kartikey Mahendra BHATT (bhattkm at gmail dot com) - * Copyright (c) 2007-2008 Bartlomiej Wolowiec - * Copyright (c) 2007 Justin Ruggles - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config_components.h" - -#include -#include -#include -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/crc.h" -#include "libavutil/downmix_info.h" -#include "libavutil/intmath.h" -#include "libavutil/opt.h" -#include "libavutil/thread.h" -#include "bswapdsp.h" -#include "aac_ac3_parser.h" -#include "ac3_parser_internal.h" -#include "ac3dec.h" -#include "ac3dec_data.h" -#include "ac3defs.h" -#include "decode.h" -#include "kbdwin.h" - -/** - * table for ungrouping 3 values in 7 bits. - * used for exponents and bap=2 mantissas - */ -static uint8_t ungroup_3_in_7_bits_tab[128][3]; - -/** tables for ungrouping mantissas */ -static int b1_mantissas[32][3]; -static int b2_mantissas[128][3]; -static int b3_mantissas[8]; -static int b4_mantissas[128][2]; -static int b5_mantissas[16]; - -/** - * Quantization table: levels for symmetric. bits for asymmetric. - * reference: Table 7.18 Mapping of bap to Quantizer - */ -static const uint8_t quantization_tab[16] = { - 0, 3, 5, 7, 11, 15, - 5, 6, 7, 8, 9, 10, 11, 12, 14, 16 -}; - -#if (!USE_FIXED) -/** dynamic range table. converts codes to scale factors. */ -static float dynamic_range_tab[256]; -float ff_ac3_heavy_dynamic_range_tab[256]; -#endif - -/** Adjustments in dB gain */ -static const float gain_levels[9] = { - LEVEL_PLUS_3DB, - LEVEL_PLUS_1POINT5DB, - LEVEL_ONE, - LEVEL_MINUS_1POINT5DB, - LEVEL_MINUS_3DB, - LEVEL_MINUS_4POINT5DB, - LEVEL_MINUS_6DB, - LEVEL_ZERO, - LEVEL_MINUS_9DB -}; - -/** Adjustments in dB gain (LFE, +10 to -21 dB) */ -static const float gain_levels_lfe[32] = { - 3.162275, 2.818382, 2.511886, 2.238719, 1.995261, 1.778278, 1.584893, - 1.412536, 1.258924, 1.122018, 1.000000, 0.891251, 0.794328, 0.707946, - 0.630957, 0.562341, 0.501187, 0.446683, 0.398107, 0.354813, 0.316227, - 0.281838, 0.251188, 0.223872, 0.199526, 0.177828, 0.158489, 0.141253, - 0.125892, 0.112201, 0.100000, 0.089125 -}; - -/** - * Table for default stereo downmixing coefficients - * reference: Section 7.8.2 Downmixing Into Two Channels - */ -static const uint8_t ac3_default_coeffs[8][5][2] = { - { { 2, 7 }, { 7, 2 }, }, - { { 4, 4 }, }, - { { 2, 7 }, { 7, 2 }, }, - { { 2, 7 }, { 5, 5 }, { 7, 2 }, }, - { { 2, 7 }, { 7, 2 }, { 6, 6 }, }, - { { 2, 7 }, { 5, 5 }, { 7, 2 }, { 8, 8 }, }, - { { 2, 7 }, { 7, 2 }, { 6, 7 }, { 7, 6 }, }, - { { 2, 7 }, { 5, 5 }, { 7, 2 }, { 6, 7 }, { 7, 6 }, }, -}; - -/** - * Symmetrical Dequantization - * reference: Section 7.3.3 Expansion of Mantissas for Symmetrical Quantization - * Tables 7.19 to 7.23 - */ -static inline int -symmetric_dequant(int code, int levels) -{ - return ((code - (levels >> 1)) * (1 << 24)) / levels; -} - -/* - * Initialize tables at runtime. - */ -static av_cold void ac3_tables_init(void) -{ - int i; - - /* generate table for ungrouping 3 values in 7 bits - reference: Section 7.1.3 Exponent Decoding */ - for (i = 0; i < 128; i++) { - ungroup_3_in_7_bits_tab[i][0] = i / 25; - ungroup_3_in_7_bits_tab[i][1] = (i % 25) / 5; - ungroup_3_in_7_bits_tab[i][2] = (i % 25) % 5; - } - - /* generate grouped mantissa tables - reference: Section 7.3.5 Ungrouping of Mantissas */ - for (i = 0; i < 32; i++) { - /* bap=1 mantissas */ - b1_mantissas[i][0] = symmetric_dequant(ff_ac3_ungroup_3_in_5_bits_tab[i][0], 3); - b1_mantissas[i][1] = symmetric_dequant(ff_ac3_ungroup_3_in_5_bits_tab[i][1], 3); - b1_mantissas[i][2] = symmetric_dequant(ff_ac3_ungroup_3_in_5_bits_tab[i][2], 3); - } - for (i = 0; i < 128; i++) { - /* bap=2 mantissas */ - b2_mantissas[i][0] = symmetric_dequant(ungroup_3_in_7_bits_tab[i][0], 5); - b2_mantissas[i][1] = symmetric_dequant(ungroup_3_in_7_bits_tab[i][1], 5); - b2_mantissas[i][2] = symmetric_dequant(ungroup_3_in_7_bits_tab[i][2], 5); - - /* bap=4 mantissas */ - b4_mantissas[i][0] = symmetric_dequant(i / 11, 11); - b4_mantissas[i][1] = symmetric_dequant(i % 11, 11); - } - /* generate ungrouped mantissa tables - reference: Tables 7.21 and 7.23 */ - for (i = 0; i < 7; i++) { - /* bap=3 mantissas */ - b3_mantissas[i] = symmetric_dequant(i, 7); - } - for (i = 0; i < 15; i++) { - /* bap=5 mantissas */ - b5_mantissas[i] = symmetric_dequant(i, 15); - } - -#if (!USE_FIXED) - /* generate dynamic range table - reference: Section 7.7.1 Dynamic Range Control */ - for (i = 0; i < 256; i++) { - int v = (i >> 5) - ((i >> 7) << 3) - 5; - dynamic_range_tab[i] = powf(2.0f, v) * ((i & 0x1F) | 0x20); - } - - /* generate compr dynamic range table - reference: Section 7.7.2 Heavy Compression */ - for (i = 0; i < 256; i++) { - int v = (i >> 4) - ((i >> 7) << 4) - 4; - ff_ac3_heavy_dynamic_range_tab[i] = powf(2.0f, v) * ((i & 0xF) | 0x10); - } -#endif -} - -static void ac3_downmix(AVCodecContext *avctx) -{ - AC3DecodeContext *s = avctx->priv_data; - const AVChannelLayout mono = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - const AVChannelLayout stereo = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; - - /* allow downmixing to stereo or mono */ -#if FF_API_OLD_CHANNEL_LAYOUT -FF_DISABLE_DEPRECATION_WARNINGS - if (avctx->request_channel_layout) { - av_channel_layout_uninit(&s->downmix_layout); - av_channel_layout_from_mask(&s->downmix_layout, avctx->request_channel_layout); - } -FF_ENABLE_DEPRECATION_WARNINGS -#endif - if (avctx->ch_layout.nb_channels > 1 && - !av_channel_layout_compare(&s->downmix_layout, &mono)) { - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - } else if (avctx->ch_layout.nb_channels > 2 && - !av_channel_layout_compare(&s->downmix_layout, &stereo)) { - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO; - } - s->downmixed = 1; -} - -/** - * AVCodec initialization - */ -static av_cold int ac3_decode_init(AVCodecContext *avctx) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - AC3DecodeContext *s = avctx->priv_data; - const float scale = 1.0f; - int i, ret; - - s->avctx = avctx; - - if ((ret = av_tx_init(&s->tx_128, &s->tx_fn_128, IMDCT_TYPE, 1, 128, &scale, 0))) - return ret; - - if ((ret = av_tx_init(&s->tx_256, &s->tx_fn_256, IMDCT_TYPE, 1, 256, &scale, 0))) - return ret; - - AC3_RENAME(ff_kbd_window_init)(s->window, 5.0, 256); - ff_bswapdsp_init(&s->bdsp); - -#if (USE_FIXED) - s->fdsp = avpriv_alloc_fixed_dsp(avctx->flags & AV_CODEC_FLAG_BITEXACT); -#else - ff_fmt_convert_init(&s->fmt_conv); - s->fdsp = avpriv_float_dsp_alloc(avctx->flags & AV_CODEC_FLAG_BITEXACT); -#endif - if (!s->fdsp) - return AVERROR(ENOMEM); - - ff_ac3dsp_init(&s->ac3dsp); - av_lfg_init(&s->dith_state, 0); - - if (USE_FIXED) - avctx->sample_fmt = AV_SAMPLE_FMT_S16P; - else - avctx->sample_fmt = AV_SAMPLE_FMT_FLTP; - - ac3_downmix(avctx); - - for (i = 0; i < AC3_MAX_CHANNELS; i++) { - s->xcfptr[i] = s->transform_coeffs[i]; - s->dlyptr[i] = s->delay[i]; - } - - ff_thread_once(&init_static_once, ac3_tables_init); - - return 0; -} - -/** - * Parse the 'sync info' and 'bit stream info' from the AC-3 bitstream. - * GetBitContext within AC3DecodeContext must point to - * the start of the synchronized AC-3 bitstream. - */ -static int ac3_parse_header(AC3DecodeContext *s) -{ - GetBitContext *gbc = &s->gbc; - int i; - - /* read the rest of the bsi. read twice for dual mono mode. */ - i = !s->channel_mode; - do { - s->dialog_normalization[(!s->channel_mode)-i] = -get_bits(gbc, 5); - if (s->dialog_normalization[(!s->channel_mode)-i] == 0) { - s->dialog_normalization[(!s->channel_mode)-i] = -31; - } - if (s->target_level != 0) { - s->level_gain[(!s->channel_mode)-i] = powf(2.0f, - (float)(s->target_level - - s->dialog_normalization[(!s->channel_mode)-i])/6.0f); - } - if (s->compression_exists[(!s->channel_mode)-i] = get_bits1(gbc)) { - s->heavy_dynamic_range[(!s->channel_mode)-i] = - AC3_HEAVY_RANGE(get_bits(gbc, 8)); - } - if (get_bits1(gbc)) - skip_bits(gbc, 8); //skip language code - if (get_bits1(gbc)) - skip_bits(gbc, 7); //skip audio production information - } while (i--); - - skip_bits(gbc, 2); //skip copyright bit and original bitstream bit - - /* skip the timecodes or parse the Alternate Bit Stream Syntax */ - if (s->bitstream_id != 6) { - if (get_bits1(gbc)) - skip_bits(gbc, 14); //skip timecode1 - if (get_bits1(gbc)) - skip_bits(gbc, 14); //skip timecode2 - } else { - if (get_bits1(gbc)) { - s->preferred_downmix = get_bits(gbc, 2); - s->center_mix_level_ltrt = get_bits(gbc, 3); - s->surround_mix_level_ltrt = av_clip(get_bits(gbc, 3), 3, 7); - s->center_mix_level = get_bits(gbc, 3); - s->surround_mix_level = av_clip(get_bits(gbc, 3), 3, 7); - } - if (get_bits1(gbc)) { - s->dolby_surround_ex_mode = get_bits(gbc, 2); - s->dolby_headphone_mode = get_bits(gbc, 2); - skip_bits(gbc, 10); // skip adconvtyp (1), xbsi2 (8), encinfo (1) - } - } - - /* skip additional bitstream info */ - if (get_bits1(gbc)) { - i = get_bits(gbc, 6); - do { - skip_bits(gbc, 8); - } while (i--); - } - - return 0; -} - -/** - * Common function to parse AC-3 or E-AC-3 frame header - */ -static int parse_frame_header(AC3DecodeContext *s) -{ - AC3HeaderInfo hdr; - int err; - - err = ff_ac3_parse_header(&s->gbc, &hdr); - if (err) - return err; - - /* get decoding parameters from header info */ - s->bit_alloc_params.sr_code = hdr.sr_code; - s->bitstream_id = hdr.bitstream_id; - s->bitstream_mode = hdr.bitstream_mode; - s->channel_mode = hdr.channel_mode; - s->lfe_on = hdr.lfe_on; - s->bit_alloc_params.sr_shift = hdr.sr_shift; - s->sample_rate = hdr.sample_rate; - s->bit_rate = hdr.bit_rate; - s->channels = hdr.channels; - s->fbw_channels = s->channels - s->lfe_on; - s->lfe_ch = s->fbw_channels + 1; - s->frame_size = hdr.frame_size; - s->superframe_size += hdr.frame_size; - s->preferred_downmix = AC3_DMIXMOD_NOTINDICATED; - s->center_mix_level = hdr.center_mix_level; - s->center_mix_level_ltrt = 4; // -3.0dB - s->surround_mix_level = hdr.surround_mix_level; - s->surround_mix_level_ltrt = 4; // -3.0dB - s->lfe_mix_level_exists = 0; - s->num_blocks = hdr.num_blocks; - s->frame_type = hdr.frame_type; - s->substreamid = hdr.substreamid; - s->dolby_surround_mode = hdr.dolby_surround_mode; - s->dolby_surround_ex_mode = AC3_DSUREXMOD_NOTINDICATED; - s->dolby_headphone_mode = AC3_DHEADPHONMOD_NOTINDICATED; - - if (s->lfe_on) { - s->start_freq[s->lfe_ch] = 0; - s->end_freq[s->lfe_ch] = 7; - s->num_exp_groups[s->lfe_ch] = 2; - s->channel_in_cpl[s->lfe_ch] = 0; - } - - if (s->bitstream_id <= 10) { - s->eac3 = 0; - s->snr_offset_strategy = 2; - s->block_switch_syntax = 1; - s->dither_flag_syntax = 1; - s->bit_allocation_syntax = 1; - s->fast_gain_syntax = 0; - s->first_cpl_leak = 0; - s->dba_syntax = 1; - s->skip_syntax = 1; - memset(s->channel_uses_aht, 0, sizeof(s->channel_uses_aht)); - return ac3_parse_header(s); - } else if (CONFIG_EAC3_DECODER) { - s->eac3 = 1; - return ff_eac3_parse_header(s); - } else { - av_log(s->avctx, AV_LOG_ERROR, "E-AC-3 support not compiled in\n"); - return AVERROR(ENOSYS); - } -} - -/** - * Set stereo downmixing coefficients based on frame header info. - * reference: Section 7.8.2 Downmixing Into Two Channels - */ -static int set_downmix_coeffs(AC3DecodeContext *s) -{ - int i; - float cmix = gain_levels[s-> center_mix_level]; - float smix = gain_levels[s->surround_mix_level]; - float norm0, norm1; - float downmix_coeffs[2][AC3_MAX_CHANNELS]; - - if (!s->downmix_coeffs[0]) { - s->downmix_coeffs[0] = av_malloc_array(2 * AC3_MAX_CHANNELS, - sizeof(**s->downmix_coeffs)); - if (!s->downmix_coeffs[0]) - return AVERROR(ENOMEM); - s->downmix_coeffs[1] = s->downmix_coeffs[0] + AC3_MAX_CHANNELS; - } - - for (i = 0; i < s->fbw_channels; i++) { - downmix_coeffs[0][i] = gain_levels[ac3_default_coeffs[s->channel_mode][i][0]]; - downmix_coeffs[1][i] = gain_levels[ac3_default_coeffs[s->channel_mode][i][1]]; - } - if (s->channel_mode > 1 && s->channel_mode & 1) { - downmix_coeffs[0][1] = downmix_coeffs[1][1] = cmix; - } - if (s->channel_mode == AC3_CHMODE_2F1R || s->channel_mode == AC3_CHMODE_3F1R) { - int nf = s->channel_mode - 2; - downmix_coeffs[0][nf] = downmix_coeffs[1][nf] = smix * LEVEL_MINUS_3DB; - } - if (s->channel_mode == AC3_CHMODE_2F2R || s->channel_mode == AC3_CHMODE_3F2R) { - int nf = s->channel_mode - 4; - downmix_coeffs[0][nf] = downmix_coeffs[1][nf+1] = smix; - } - - /* renormalize */ - norm0 = norm1 = 0.0; - for (i = 0; i < s->fbw_channels; i++) { - norm0 += downmix_coeffs[0][i]; - norm1 += downmix_coeffs[1][i]; - } - norm0 = 1.0f / norm0; - norm1 = 1.0f / norm1; - for (i = 0; i < s->fbw_channels; i++) { - downmix_coeffs[0][i] *= norm0; - downmix_coeffs[1][i] *= norm1; - } - - if (s->output_mode == AC3_CHMODE_MONO) { - for (i = 0; i < s->fbw_channels; i++) - downmix_coeffs[0][i] = (downmix_coeffs[0][i] + - downmix_coeffs[1][i]) * LEVEL_MINUS_3DB; - } - for (i = 0; i < s->fbw_channels; i++) { - s->downmix_coeffs[0][i] = FIXR12(downmix_coeffs[0][i]); - s->downmix_coeffs[1][i] = FIXR12(downmix_coeffs[1][i]); - } - - return 0; -} - -/** - * Decode the grouped exponents according to exponent strategy. - * reference: Section 7.1.3 Exponent Decoding - */ -static int decode_exponents(AC3DecodeContext *s, - GetBitContext *gbc, int exp_strategy, int ngrps, - uint8_t absexp, int8_t *dexps) -{ - int i, j, grp, group_size; - int dexp[256]; - int expacc, prevexp; - - /* unpack groups */ - group_size = exp_strategy + (exp_strategy == EXP_D45); - for (grp = 0, i = 0; grp < ngrps; grp++) { - expacc = get_bits(gbc, 7); - if (expacc >= 125) { - av_log(s->avctx, AV_LOG_ERROR, "expacc %d is out-of-range\n", expacc); - return AVERROR_INVALIDDATA; - } - dexp[i++] = ungroup_3_in_7_bits_tab[expacc][0]; - dexp[i++] = ungroup_3_in_7_bits_tab[expacc][1]; - dexp[i++] = ungroup_3_in_7_bits_tab[expacc][2]; - } - - /* convert to absolute exps and expand groups */ - prevexp = absexp; - for (i = 0, j = 0; i < ngrps * 3; i++) { - prevexp += dexp[i] - 2; - if (prevexp > 24U) { - av_log(s->avctx, AV_LOG_ERROR, "exponent %d is out-of-range\n", prevexp); - return AVERROR_INVALIDDATA; - } - switch (group_size) { - case 4: dexps[j++] = prevexp; - dexps[j++] = prevexp; - case 2: dexps[j++] = prevexp; - case 1: dexps[j++] = prevexp; - } - } - return 0; -} - -/** - * Generate transform coefficients for each coupled channel in the coupling - * range using the coupling coefficients and coupling coordinates. - * reference: Section 7.4.3 Coupling Coordinate Format - */ -static void calc_transform_coeffs_cpl(AC3DecodeContext *s) -{ - int bin, band, ch; - - bin = s->start_freq[CPL_CH]; - for (band = 0; band < s->num_cpl_bands; band++) { - int band_start = bin; - int band_end = bin + s->cpl_band_sizes[band]; - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (s->channel_in_cpl[ch]) { - int cpl_coord = s->cpl_coords[ch][band] << 5; - for (bin = band_start; bin < band_end; bin++) { - s->fixed_coeffs[ch][bin] = - MULH(s->fixed_coeffs[CPL_CH][bin] * (1 << 4), cpl_coord); - } - if (ch == 2 && s->phase_flags[band]) { - for (bin = band_start; bin < band_end; bin++) - s->fixed_coeffs[2][bin] = -s->fixed_coeffs[2][bin]; - } - } - } - bin = band_end; - } -} - -/** - * Grouped mantissas for 3-level 5-level and 11-level quantization - */ -typedef struct mant_groups { - int b1_mant[2]; - int b2_mant[2]; - int b4_mant; - int b1; - int b2; - int b4; -} mant_groups; - -/** - * Decode the transform coefficients for a particular channel - * reference: Section 7.3 Quantization and Decoding of Mantissas - */ -static void ac3_decode_transform_coeffs_ch(AC3DecodeContext *s, int ch_index, mant_groups *m) -{ - int start_freq = s->start_freq[ch_index]; - int end_freq = s->end_freq[ch_index]; - uint8_t *baps = s->bap[ch_index]; - int8_t *exps = s->dexps[ch_index]; - int32_t *coeffs = s->fixed_coeffs[ch_index]; - int dither = (ch_index == CPL_CH) || s->dither_flag[ch_index]; - GetBitContext *gbc = &s->gbc; - int freq; - - for (freq = start_freq; freq < end_freq; freq++) { - int bap = baps[freq]; - int mantissa; - switch (bap) { - case 0: - /* random noise with approximate range of -0.707 to 0.707 */ - if (dither) - mantissa = (((av_lfg_get(&s->dith_state)>>8)*181)>>8) - 5931008; - else - mantissa = 0; - break; - case 1: - if (m->b1) { - m->b1--; - mantissa = m->b1_mant[m->b1]; - } else { - int bits = get_bits(gbc, 5); - mantissa = b1_mantissas[bits][0]; - m->b1_mant[1] = b1_mantissas[bits][1]; - m->b1_mant[0] = b1_mantissas[bits][2]; - m->b1 = 2; - } - break; - case 2: - if (m->b2) { - m->b2--; - mantissa = m->b2_mant[m->b2]; - } else { - int bits = get_bits(gbc, 7); - mantissa = b2_mantissas[bits][0]; - m->b2_mant[1] = b2_mantissas[bits][1]; - m->b2_mant[0] = b2_mantissas[bits][2]; - m->b2 = 2; - } - break; - case 3: - mantissa = b3_mantissas[get_bits(gbc, 3)]; - break; - case 4: - if (m->b4) { - m->b4 = 0; - mantissa = m->b4_mant; - } else { - int bits = get_bits(gbc, 7); - mantissa = b4_mantissas[bits][0]; - m->b4_mant = b4_mantissas[bits][1]; - m->b4 = 1; - } - break; - case 5: - mantissa = b5_mantissas[get_bits(gbc, 4)]; - break; - default: /* 6 to 15 */ - /* Shift mantissa and sign-extend it. */ - if (bap > 15) { - av_log(s->avctx, AV_LOG_ERROR, "bap %d is invalid in plain AC-3\n", bap); - bap = 15; - } - mantissa = (unsigned)get_sbits(gbc, quantization_tab[bap]) << (24 - quantization_tab[bap]); - break; - } - coeffs[freq] = mantissa >> exps[freq]; - } -} - -/** - * Remove random dithering from coupling range coefficients with zero-bit - * mantissas for coupled channels which do not use dithering. - * reference: Section 7.3.4 Dither for Zero Bit Mantissas (bap=0) - */ -static void remove_dithering(AC3DecodeContext *s) { - int ch, i; - - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (!s->dither_flag[ch] && s->channel_in_cpl[ch]) { - for (i = s->start_freq[CPL_CH]; i < s->end_freq[CPL_CH]; i++) { - if (!s->bap[CPL_CH][i]) - s->fixed_coeffs[ch][i] = 0; - } - } - } -} - -static inline void decode_transform_coeffs_ch(AC3DecodeContext *s, int blk, - int ch, mant_groups *m) -{ - if (!s->channel_uses_aht[ch]) { - ac3_decode_transform_coeffs_ch(s, ch, m); - } else { - /* if AHT is used, mantissas for all blocks are encoded in the first - block of the frame. */ - int bin; - if (CONFIG_EAC3_DECODER && !blk) - ff_eac3_decode_transform_coeffs_aht_ch(s, ch); - for (bin = s->start_freq[ch]; bin < s->end_freq[ch]; bin++) { - s->fixed_coeffs[ch][bin] = s->pre_mantissa[ch][bin][blk] >> s->dexps[ch][bin]; - } - } -} - -/** - * Decode the transform coefficients. - */ -static inline void decode_transform_coeffs(AC3DecodeContext *s, int blk) -{ - int ch, end; - int got_cplchan = 0; - mant_groups m; - - m.b1 = m.b2 = m.b4 = 0; - - for (ch = 1; ch <= s->channels; ch++) { - /* transform coefficients for full-bandwidth channel */ - decode_transform_coeffs_ch(s, blk, ch, &m); - /* transform coefficients for coupling channel come right after the - coefficients for the first coupled channel*/ - if (s->channel_in_cpl[ch]) { - if (!got_cplchan) { - decode_transform_coeffs_ch(s, blk, CPL_CH, &m); - calc_transform_coeffs_cpl(s); - got_cplchan = 1; - } - end = s->end_freq[CPL_CH]; - } else { - end = s->end_freq[ch]; - } - do - s->fixed_coeffs[ch][end] = 0; - while (++end < 256); - } - - /* zero the dithered coefficients for appropriate channels */ - remove_dithering(s); -} - -/** - * Stereo rematrixing. - * reference: Section 7.5.4 Rematrixing : Decoding Technique - */ -static void do_rematrixing(AC3DecodeContext *s) -{ - int bnd, i; - int end, bndend; - - end = FFMIN(s->end_freq[1], s->end_freq[2]); - - for (bnd = 0; bnd < s->num_rematrixing_bands; bnd++) { - if (s->rematrixing_flags[bnd]) { - bndend = FFMIN(end, ff_ac3_rematrix_band_tab[bnd + 1]); - for (i = ff_ac3_rematrix_band_tab[bnd]; i < bndend; i++) { - int tmp0 = s->fixed_coeffs[1][i]; - s->fixed_coeffs[1][i] += s->fixed_coeffs[2][i]; - s->fixed_coeffs[2][i] = tmp0 - s->fixed_coeffs[2][i]; - } - } - } -} - -/** - * Inverse MDCT Transform. - * Convert frequency domain coefficients to time-domain audio samples. - * reference: Section 7.9.4 Transformation Equations - */ -static inline void do_imdct(AC3DecodeContext *s, int channels, int offset) -{ - int ch; - - for (ch = 1; ch <= channels; ch++) { - if (s->block_switch[ch]) { - int i; - INTFLOAT *x = s->tmp_output + 128; - for (i = 0; i < 128; i++) - x[i] = s->transform_coeffs[ch][2 * i]; - s->tx_fn_128(s->tx_128, s->tmp_output, x, sizeof(INTFLOAT)); -#if USE_FIXED - s->fdsp->vector_fmul_window_scaled(s->outptr[ch - 1], s->delay[ch - 1 + offset], - s->tmp_output, s->window, 128, 8); -#else - s->fdsp->vector_fmul_window(s->outptr[ch - 1], s->delay[ch - 1 + offset], - s->tmp_output, s->window, 128); -#endif - for (i = 0; i < 128; i++) - x[i] = s->transform_coeffs[ch][2 * i + 1]; - s->tx_fn_128(s->tx_128, s->delay[ch - 1 + offset], x, sizeof(INTFLOAT)); - } else { - s->tx_fn_256(s->tx_256, s->tmp_output, s->transform_coeffs[ch], sizeof(INTFLOAT)); -#if USE_FIXED - s->fdsp->vector_fmul_window_scaled(s->outptr[ch - 1], s->delay[ch - 1 + offset], - s->tmp_output, s->window, 128, 8); -#else - s->fdsp->vector_fmul_window(s->outptr[ch - 1], s->delay[ch - 1 + offset], - s->tmp_output, s->window, 128); -#endif - memcpy(s->delay[ch - 1 + offset], s->tmp_output + 128, 128 * sizeof(INTFLOAT)); - } - } -} - -/** - * Upmix delay samples from stereo to original channel layout. - */ -static void ac3_upmix_delay(AC3DecodeContext *s) -{ - int channel_data_size = sizeof(s->delay[0]); - switch (s->channel_mode) { - case AC3_CHMODE_DUALMONO: - case AC3_CHMODE_STEREO: - /* upmix mono to stereo */ - memcpy(s->delay[1], s->delay[0], channel_data_size); - break; - case AC3_CHMODE_2F2R: - memset(s->delay[3], 0, channel_data_size); - case AC3_CHMODE_2F1R: - memset(s->delay[2], 0, channel_data_size); - break; - case AC3_CHMODE_3F2R: - memset(s->delay[4], 0, channel_data_size); - case AC3_CHMODE_3F1R: - memset(s->delay[3], 0, channel_data_size); - case AC3_CHMODE_3F: - memcpy(s->delay[2], s->delay[1], channel_data_size); - memset(s->delay[1], 0, channel_data_size); - break; - } -} - -/** - * Decode band structure for coupling, spectral extension, or enhanced coupling. - * The band structure defines how many subbands are in each band. For each - * subband in the range, 1 means it is combined with the previous band, and 0 - * means that it starts a new band. - * - * @param[in] gbc bit reader context - * @param[in] blk block number - * @param[in] eac3 flag to indicate E-AC-3 - * @param[in] ecpl flag to indicate enhanced coupling - * @param[in] start_subband subband number for start of range - * @param[in] end_subband subband number for end of range - * @param[in] default_band_struct default band structure table - * @param[out] num_bands number of bands (optionally NULL) - * @param[out] band_sizes array containing the number of bins in each band (optionally NULL) - * @param[in,out] band_struct current band structure - */ -static void decode_band_structure(GetBitContext *gbc, int blk, int eac3, - int ecpl, int start_subband, int end_subband, - const uint8_t *default_band_struct, - int *num_bands, uint8_t *band_sizes, - uint8_t *band_struct, int band_struct_size) -{ - int subbnd, bnd, n_subbands, n_bands=0; - uint8_t bnd_sz[22]; - - n_subbands = end_subband - start_subband; - - if (!blk) - memcpy(band_struct, default_band_struct, band_struct_size); - - av_assert0(band_struct_size >= start_subband + n_subbands); - - band_struct += start_subband + 1; - - /* decode band structure from bitstream or use default */ - if (!eac3 || get_bits1(gbc)) { - for (subbnd = 0; subbnd < n_subbands - 1; subbnd++) { - band_struct[subbnd] = get_bits1(gbc); - } - } - - /* calculate number of bands and band sizes based on band structure. - note that the first 4 subbands in enhanced coupling span only 6 bins - instead of 12. */ - if (num_bands || band_sizes ) { - n_bands = n_subbands; - bnd_sz[0] = ecpl ? 6 : 12; - for (bnd = 0, subbnd = 1; subbnd < n_subbands; subbnd++) { - int subbnd_size = (ecpl && subbnd < 4) ? 6 : 12; - if (band_struct[subbnd - 1]) { - n_bands--; - bnd_sz[bnd] += subbnd_size; - } else { - bnd_sz[++bnd] = subbnd_size; - } - } - } - - /* set optional output params */ - if (num_bands) - *num_bands = n_bands; - if (band_sizes) - memcpy(band_sizes, bnd_sz, n_bands); -} - -static inline int spx_strategy(AC3DecodeContext *s, int blk) -{ - GetBitContext *bc = &s->gbc; - int fbw_channels = s->fbw_channels; - int dst_start_freq, dst_end_freq, src_start_freq, - start_subband, end_subband, ch; - - /* determine which channels use spx */ - if (s->channel_mode == AC3_CHMODE_MONO) { - s->channel_uses_spx[1] = 1; - } else { - for (ch = 1; ch <= fbw_channels; ch++) - s->channel_uses_spx[ch] = get_bits1(bc); - } - - /* get the frequency bins of the spx copy region and the spx start - and end subbands */ - dst_start_freq = get_bits(bc, 2); - start_subband = get_bits(bc, 3) + 2; - if (start_subband > 7) - start_subband += start_subband - 7; - end_subband = get_bits(bc, 3) + 5; -#if USE_FIXED - s->spx_dst_end_freq = end_freq_inv_tab[end_subband-5]; -#endif - if (end_subband > 7) - end_subband += end_subband - 7; - dst_start_freq = dst_start_freq * 12 + 25; - src_start_freq = start_subband * 12 + 25; - dst_end_freq = end_subband * 12 + 25; - - /* check validity of spx ranges */ - if (start_subband >= end_subband) { - av_log(s->avctx, AV_LOG_ERROR, "invalid spectral extension " - "range (%d >= %d)\n", start_subband, end_subband); - return AVERROR_INVALIDDATA; - } - if (dst_start_freq >= src_start_freq) { - av_log(s->avctx, AV_LOG_ERROR, "invalid spectral extension " - "copy start bin (%d >= %d)\n", dst_start_freq, src_start_freq); - return AVERROR_INVALIDDATA; - } - - s->spx_dst_start_freq = dst_start_freq; - s->spx_src_start_freq = src_start_freq; - if (!USE_FIXED) - s->spx_dst_end_freq = dst_end_freq; - - decode_band_structure(bc, blk, s->eac3, 0, - start_subband, end_subband, - ff_eac3_default_spx_band_struct, - &s->num_spx_bands, - s->spx_band_sizes, - s->spx_band_struct, sizeof(s->spx_band_struct)); - return 0; -} - -static inline void spx_coordinates(AC3DecodeContext *s) -{ - GetBitContext *bc = &s->gbc; - int fbw_channels = s->fbw_channels; - int ch, bnd; - - for (ch = 1; ch <= fbw_channels; ch++) { - if (s->channel_uses_spx[ch]) { - if (s->first_spx_coords[ch] || get_bits1(bc)) { - INTFLOAT spx_blend; - int bin, master_spx_coord; - - s->first_spx_coords[ch] = 0; - spx_blend = AC3_SPX_BLEND(get_bits(bc, 5)); - master_spx_coord = get_bits(bc, 2) * 3; - - bin = s->spx_src_start_freq; - for (bnd = 0; bnd < s->num_spx_bands; bnd++) { - int bandsize = s->spx_band_sizes[bnd]; - int spx_coord_exp, spx_coord_mant; - INTFLOAT nratio, sblend, nblend; -#if USE_FIXED - /* calculate blending factors */ - int64_t accu = ((bin << 23) + (bandsize << 22)) - * (int64_t)s->spx_dst_end_freq; - nratio = (int)(accu >> 32); - nratio -= spx_blend << 18; - - if (nratio < 0) { - nblend = 0; - sblend = 0x800000; - } else if (nratio > 0x7fffff) { - nblend = 14529495; // sqrt(3) in FP.23 - sblend = 0; - } else { - nblend = fixed_sqrt(nratio, 23); - accu = (int64_t)nblend * 1859775393; - nblend = (int)((accu + (1<<29)) >> 30); - sblend = fixed_sqrt(0x800000 - nratio, 23); - } -#else - float spx_coord; - - /* calculate blending factors */ - nratio = ((float)((bin + (bandsize >> 1))) / s->spx_dst_end_freq) - spx_blend; - nratio = av_clipf(nratio, 0.0f, 1.0f); - nblend = sqrtf(3.0f * nratio); // noise is scaled by sqrt(3) - // to give unity variance - sblend = sqrtf(1.0f - nratio); -#endif - bin += bandsize; - - /* decode spx coordinates */ - spx_coord_exp = get_bits(bc, 4); - spx_coord_mant = get_bits(bc, 2); - if (spx_coord_exp == 15) spx_coord_mant <<= 1; - else spx_coord_mant += 4; - spx_coord_mant <<= (25 - spx_coord_exp - master_spx_coord); - - /* multiply noise and signal blending factors by spx coordinate */ -#if USE_FIXED - accu = (int64_t)nblend * spx_coord_mant; - s->spx_noise_blend[ch][bnd] = (int)((accu + (1<<22)) >> 23); - accu = (int64_t)sblend * spx_coord_mant; - s->spx_signal_blend[ch][bnd] = (int)((accu + (1<<22)) >> 23); -#else - spx_coord = spx_coord_mant * (1.0f / (1 << 23)); - s->spx_noise_blend [ch][bnd] = nblend * spx_coord; - s->spx_signal_blend[ch][bnd] = sblend * spx_coord; -#endif - } - } - } else { - s->first_spx_coords[ch] = 1; - } - } -} - -static inline int coupling_strategy(AC3DecodeContext *s, int blk, - uint8_t *bit_alloc_stages) -{ - GetBitContext *bc = &s->gbc; - int fbw_channels = s->fbw_channels; - int channel_mode = s->channel_mode; - int ch; - - memset(bit_alloc_stages, 3, AC3_MAX_CHANNELS); - if (!s->eac3) - s->cpl_in_use[blk] = get_bits1(bc); - if (s->cpl_in_use[blk]) { - /* coupling in use */ - int cpl_start_subband, cpl_end_subband; - - if (channel_mode < AC3_CHMODE_STEREO) { - av_log(s->avctx, AV_LOG_ERROR, "coupling not allowed in mono or dual-mono\n"); - return AVERROR_INVALIDDATA; - } - - /* check for enhanced coupling */ - if (s->eac3 && get_bits1(bc)) { - /* TODO: parse enhanced coupling strategy info */ - avpriv_request_sample(s->avctx, "Enhanced coupling"); - return AVERROR_PATCHWELCOME; - } - - /* determine which channels are coupled */ - if (s->eac3 && s->channel_mode == AC3_CHMODE_STEREO) { - s->channel_in_cpl[1] = 1; - s->channel_in_cpl[2] = 1; - } else { - for (ch = 1; ch <= fbw_channels; ch++) - s->channel_in_cpl[ch] = get_bits1(bc); - } - - /* phase flags in use */ - if (channel_mode == AC3_CHMODE_STEREO) - s->phase_flags_in_use = get_bits1(bc); - - /* coupling frequency range */ - cpl_start_subband = get_bits(bc, 4); - cpl_end_subband = s->spx_in_use ? (s->spx_src_start_freq - 37) / 12 : - get_bits(bc, 4) + 3; - if (cpl_start_subband >= cpl_end_subband) { - av_log(s->avctx, AV_LOG_ERROR, "invalid coupling range (%d >= %d)\n", - cpl_start_subband, cpl_end_subband); - return AVERROR_INVALIDDATA; - } - s->start_freq[CPL_CH] = cpl_start_subband * 12 + 37; - s->end_freq[CPL_CH] = cpl_end_subband * 12 + 37; - - decode_band_structure(bc, blk, s->eac3, 0, cpl_start_subband, - cpl_end_subband, - ff_eac3_default_cpl_band_struct, - &s->num_cpl_bands, s->cpl_band_sizes, - s->cpl_band_struct, sizeof(s->cpl_band_struct)); - } else { - /* coupling not in use */ - for (ch = 1; ch <= fbw_channels; ch++) { - s->channel_in_cpl[ch] = 0; - s->first_cpl_coords[ch] = 1; - } - s->first_cpl_leak = s->eac3; - s->phase_flags_in_use = 0; - } - - return 0; -} - -static inline int coupling_coordinates(AC3DecodeContext *s, int blk) -{ - GetBitContext *bc = &s->gbc; - int fbw_channels = s->fbw_channels; - int ch, bnd; - int cpl_coords_exist = 0; - - for (ch = 1; ch <= fbw_channels; ch++) { - if (s->channel_in_cpl[ch]) { - if ((s->eac3 && s->first_cpl_coords[ch]) || get_bits1(bc)) { - int master_cpl_coord, cpl_coord_exp, cpl_coord_mant; - s->first_cpl_coords[ch] = 0; - cpl_coords_exist = 1; - master_cpl_coord = 3 * get_bits(bc, 2); - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - cpl_coord_exp = get_bits(bc, 4); - cpl_coord_mant = get_bits(bc, 4); - if (cpl_coord_exp == 15) - s->cpl_coords[ch][bnd] = cpl_coord_mant << 22; - else - s->cpl_coords[ch][bnd] = (cpl_coord_mant + 16) << 21; - s->cpl_coords[ch][bnd] >>= (cpl_coord_exp + master_cpl_coord); - } - } else if (!blk) { - av_log(s->avctx, AV_LOG_ERROR, "new coupling coordinates must " - "be present in block 0\n"); - return AVERROR_INVALIDDATA; - } - } else { - /* channel not in coupling */ - s->first_cpl_coords[ch] = 1; - } - } - /* phase flags */ - if (s->channel_mode == AC3_CHMODE_STEREO && cpl_coords_exist) { - for (bnd = 0; bnd < s->num_cpl_bands; bnd++) { - s->phase_flags[bnd] = s->phase_flags_in_use ? get_bits1(bc) : 0; - } - } - - return 0; -} - -/** - * Decode a single audio block from the AC-3 bitstream. - */ -static int decode_audio_block(AC3DecodeContext *s, int blk, int offset) -{ - int fbw_channels = s->fbw_channels; - int channel_mode = s->channel_mode; - int i, bnd, seg, ch, ret; - int different_transforms; - int downmix_output; - int cpl_in_use; - GetBitContext *gbc = &s->gbc; - uint8_t bit_alloc_stages[AC3_MAX_CHANNELS] = { 0 }; - - /* block switch flags */ - different_transforms = 0; - if (s->block_switch_syntax) { - for (ch = 1; ch <= fbw_channels; ch++) { - s->block_switch[ch] = get_bits1(gbc); - if (ch > 1 && s->block_switch[ch] != s->block_switch[1]) - different_transforms = 1; - } - } - - /* dithering flags */ - if (s->dither_flag_syntax) { - for (ch = 1; ch <= fbw_channels; ch++) { - s->dither_flag[ch] = get_bits1(gbc); - } - } - - /* dynamic range */ - i = !s->channel_mode; - do { - if (get_bits1(gbc)) { - /* Allow asymmetric application of DRC when drc_scale > 1. - Amplification of quiet sounds is enhanced */ - int range_bits = get_bits(gbc, 8); - INTFLOAT range = AC3_RANGE(range_bits); - if (range_bits <= 127 || s->drc_scale <= 1.0) - s->dynamic_range[i] = AC3_DYNAMIC_RANGE(range); - else - s->dynamic_range[i] = range; - } else if (blk == 0) { - s->dynamic_range[i] = AC3_DYNAMIC_RANGE1; - } - } while (i--); - - /* spectral extension strategy */ - if (s->eac3 && (!blk || get_bits1(gbc))) { - s->spx_in_use = get_bits1(gbc); - if (s->spx_in_use) { - if ((ret = spx_strategy(s, blk)) < 0) - return ret; - } - } - if (!s->eac3 || !s->spx_in_use) { - s->spx_in_use = 0; - for (ch = 1; ch <= fbw_channels; ch++) { - s->channel_uses_spx[ch] = 0; - s->first_spx_coords[ch] = 1; - } - } - - /* spectral extension coordinates */ - if (s->spx_in_use) - spx_coordinates(s); - - /* coupling strategy */ - if (s->eac3 ? s->cpl_strategy_exists[blk] : get_bits1(gbc)) { - if ((ret = coupling_strategy(s, blk, bit_alloc_stages)) < 0) - return ret; - } else if (!s->eac3) { - if (!blk) { - av_log(s->avctx, AV_LOG_ERROR, "new coupling strategy must " - "be present in block 0\n"); - return AVERROR_INVALIDDATA; - } else { - s->cpl_in_use[blk] = s->cpl_in_use[blk-1]; - } - } - cpl_in_use = s->cpl_in_use[blk]; - - /* coupling coordinates */ - if (cpl_in_use) { - if ((ret = coupling_coordinates(s, blk)) < 0) - return ret; - } - - /* stereo rematrixing strategy and band structure */ - if (channel_mode == AC3_CHMODE_STEREO) { - if ((s->eac3 && !blk) || get_bits1(gbc)) { - s->num_rematrixing_bands = 4; - if (cpl_in_use && s->start_freq[CPL_CH] <= 61) { - s->num_rematrixing_bands -= 1 + (s->start_freq[CPL_CH] == 37); - } else if (s->spx_in_use && s->spx_src_start_freq <= 61) { - s->num_rematrixing_bands--; - } - for (bnd = 0; bnd < s->num_rematrixing_bands; bnd++) - s->rematrixing_flags[bnd] = get_bits1(gbc); - } else if (!blk) { - av_log(s->avctx, AV_LOG_WARNING, "Warning: " - "new rematrixing strategy not present in block 0\n"); - s->num_rematrixing_bands = 0; - } - } - - /* exponent strategies for each channel */ - for (ch = !cpl_in_use; ch <= s->channels; ch++) { - if (!s->eac3) - s->exp_strategy[blk][ch] = get_bits(gbc, 2 - (ch == s->lfe_ch)); - if (s->exp_strategy[blk][ch] != EXP_REUSE) - bit_alloc_stages[ch] = 3; - } - - /* channel bandwidth */ - for (ch = 1; ch <= fbw_channels; ch++) { - s->start_freq[ch] = 0; - if (s->exp_strategy[blk][ch] != EXP_REUSE) { - int group_size; - int prev = s->end_freq[ch]; - if (s->channel_in_cpl[ch]) - s->end_freq[ch] = s->start_freq[CPL_CH]; - else if (s->channel_uses_spx[ch]) - s->end_freq[ch] = s->spx_src_start_freq; - else { - int bandwidth_code = get_bits(gbc, 6); - if (bandwidth_code > 60) { - av_log(s->avctx, AV_LOG_ERROR, "bandwidth code = %d > 60\n", bandwidth_code); - return AVERROR_INVALIDDATA; - } - s->end_freq[ch] = bandwidth_code * 3 + 73; - } - group_size = 3 << (s->exp_strategy[blk][ch] - 1); - s->num_exp_groups[ch] = (s->end_freq[ch] + group_size-4) / group_size; - if (blk > 0 && s->end_freq[ch] != prev) - memset(bit_alloc_stages, 3, AC3_MAX_CHANNELS); - } - } - if (cpl_in_use && s->exp_strategy[blk][CPL_CH] != EXP_REUSE) { - s->num_exp_groups[CPL_CH] = (s->end_freq[CPL_CH] - s->start_freq[CPL_CH]) / - (3 << (s->exp_strategy[blk][CPL_CH] - 1)); - } - - /* decode exponents for each channel */ - for (ch = !cpl_in_use; ch <= s->channels; ch++) { - if (s->exp_strategy[blk][ch] != EXP_REUSE) { - s->dexps[ch][0] = get_bits(gbc, 4) << !ch; - if (decode_exponents(s, gbc, s->exp_strategy[blk][ch], - s->num_exp_groups[ch], s->dexps[ch][0], - &s->dexps[ch][s->start_freq[ch]+!!ch])) { - return AVERROR_INVALIDDATA; - } - if (ch != CPL_CH && ch != s->lfe_ch) - skip_bits(gbc, 2); /* skip gainrng */ - } - } - - /* bit allocation information */ - if (s->bit_allocation_syntax) { - if (get_bits1(gbc)) { - s->bit_alloc_params.slow_decay = ff_ac3_slow_decay_tab[get_bits(gbc, 2)] >> s->bit_alloc_params.sr_shift; - s->bit_alloc_params.fast_decay = ff_ac3_fast_decay_tab[get_bits(gbc, 2)] >> s->bit_alloc_params.sr_shift; - s->bit_alloc_params.slow_gain = ff_ac3_slow_gain_tab[get_bits(gbc, 2)]; - s->bit_alloc_params.db_per_bit = ff_ac3_db_per_bit_tab[get_bits(gbc, 2)]; - s->bit_alloc_params.floor = ff_ac3_floor_tab[get_bits(gbc, 3)]; - for (ch = !cpl_in_use; ch <= s->channels; ch++) - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2); - } else if (!blk) { - av_log(s->avctx, AV_LOG_ERROR, "new bit allocation info must " - "be present in block 0\n"); - return AVERROR_INVALIDDATA; - } - } - - /* signal-to-noise ratio offsets and fast gains (signal-to-mask ratios) */ - if (!s->eac3 || !blk) { - if (s->snr_offset_strategy && get_bits1(gbc)) { - int snr = 0; - int csnr; - csnr = (get_bits(gbc, 6) - 15) << 4; - for (i = ch = !cpl_in_use; ch <= s->channels; ch++) { - /* snr offset */ - if (ch == i || s->snr_offset_strategy == 2) - snr = (csnr + get_bits(gbc, 4)) << 2; - /* run at least last bit allocation stage if snr offset changes */ - if (blk && s->snr_offset[ch] != snr) { - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 1); - } - s->snr_offset[ch] = snr; - - /* fast gain (normal AC-3 only) */ - if (!s->eac3) { - int prev = s->fast_gain[ch]; - s->fast_gain[ch] = ff_ac3_fast_gain_tab[get_bits(gbc, 3)]; - /* run last 2 bit allocation stages if fast gain changes */ - if (blk && prev != s->fast_gain[ch]) - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2); - } - } - } else if (!s->eac3 && !blk) { - av_log(s->avctx, AV_LOG_ERROR, "new snr offsets must be present in block 0\n"); - return AVERROR_INVALIDDATA; - } - } - - /* fast gain (E-AC-3 only) */ - if (s->fast_gain_syntax && get_bits1(gbc)) { - for (ch = !cpl_in_use; ch <= s->channels; ch++) { - int prev = s->fast_gain[ch]; - s->fast_gain[ch] = ff_ac3_fast_gain_tab[get_bits(gbc, 3)]; - /* run last 2 bit allocation stages if fast gain changes */ - if (blk && prev != s->fast_gain[ch]) - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2); - } - } else if (s->eac3 && !blk) { - for (ch = !cpl_in_use; ch <= s->channels; ch++) - s->fast_gain[ch] = ff_ac3_fast_gain_tab[4]; - } - - /* E-AC-3 to AC-3 converter SNR offset */ - if (s->frame_type == EAC3_FRAME_TYPE_INDEPENDENT && get_bits1(gbc)) { - skip_bits(gbc, 10); // skip converter snr offset - } - - /* coupling leak information */ - if (cpl_in_use) { - if (s->first_cpl_leak || get_bits1(gbc)) { - int fl = get_bits(gbc, 3); - int sl = get_bits(gbc, 3); - /* run last 2 bit allocation stages for coupling channel if - coupling leak changes */ - if (blk && (fl != s->bit_alloc_params.cpl_fast_leak || - sl != s->bit_alloc_params.cpl_slow_leak)) { - bit_alloc_stages[CPL_CH] = FFMAX(bit_alloc_stages[CPL_CH], 2); - } - s->bit_alloc_params.cpl_fast_leak = fl; - s->bit_alloc_params.cpl_slow_leak = sl; - } else if (!s->eac3 && !blk) { - av_log(s->avctx, AV_LOG_ERROR, "new coupling leak info must " - "be present in block 0\n"); - return AVERROR_INVALIDDATA; - } - s->first_cpl_leak = 0; - } - - /* delta bit allocation information */ - if (s->dba_syntax && get_bits1(gbc)) { - /* delta bit allocation exists (strategy) */ - for (ch = !cpl_in_use; ch <= fbw_channels; ch++) { - s->dba_mode[ch] = get_bits(gbc, 2); - if (s->dba_mode[ch] == DBA_RESERVED) { - av_log(s->avctx, AV_LOG_ERROR, "delta bit allocation strategy reserved\n"); - return AVERROR_INVALIDDATA; - } - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2); - } - /* channel delta offset, len and bit allocation */ - for (ch = !cpl_in_use; ch <= fbw_channels; ch++) { - if (s->dba_mode[ch] == DBA_NEW) { - s->dba_nsegs[ch] = get_bits(gbc, 3) + 1; - for (seg = 0; seg < s->dba_nsegs[ch]; seg++) { - s->dba_offsets[ch][seg] = get_bits(gbc, 5); - s->dba_lengths[ch][seg] = get_bits(gbc, 4); - s->dba_values[ch][seg] = get_bits(gbc, 3); - } - /* run last 2 bit allocation stages if new dba values */ - bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2); - } - } - } else if (blk == 0) { - for (ch = 0; ch <= s->channels; ch++) { - s->dba_mode[ch] = DBA_NONE; - } - } - - /* Bit allocation */ - for (ch = !cpl_in_use; ch <= s->channels; ch++) { - if (bit_alloc_stages[ch] > 2) { - /* Exponent mapping into PSD and PSD integration */ - ff_ac3_bit_alloc_calc_psd(s->dexps[ch], - s->start_freq[ch], s->end_freq[ch], - s->psd[ch], s->band_psd[ch]); - } - if (bit_alloc_stages[ch] > 1) { - /* Compute excitation function, Compute masking curve, and - Apply delta bit allocation */ - if (ff_ac3_bit_alloc_calc_mask(&s->bit_alloc_params, s->band_psd[ch], - s->start_freq[ch], s->end_freq[ch], - s->fast_gain[ch], (ch == s->lfe_ch), - s->dba_mode[ch], s->dba_nsegs[ch], - s->dba_offsets[ch], s->dba_lengths[ch], - s->dba_values[ch], s->mask[ch])) { - av_log(s->avctx, AV_LOG_ERROR, "error in bit allocation\n"); - return AVERROR_INVALIDDATA; - } - } - if (bit_alloc_stages[ch] > 0) { - /* Compute bit allocation */ - const uint8_t *bap_tab = s->channel_uses_aht[ch] ? - ff_eac3_hebap_tab : ff_ac3_bap_tab; - s->ac3dsp.bit_alloc_calc_bap(s->mask[ch], s->psd[ch], - s->start_freq[ch], s->end_freq[ch], - s->snr_offset[ch], - s->bit_alloc_params.floor, - bap_tab, s->bap[ch]); - } - } - - /* unused dummy data */ - if (s->skip_syntax && get_bits1(gbc)) { - int skipl = get_bits(gbc, 9); - skip_bits_long(gbc, 8 * skipl); - } - - /* unpack the transform coefficients - this also uncouples channels if coupling is in use. */ - decode_transform_coeffs(s, blk); - - /* TODO: generate enhanced coupling coordinates and uncouple */ - - /* recover coefficients if rematrixing is in use */ - if (s->channel_mode == AC3_CHMODE_STEREO) - do_rematrixing(s); - - /* apply scaling to coefficients (headroom, dynrng) */ - for (ch = 1; ch <= s->channels; ch++) { - int audio_channel = 0; - INTFLOAT gain; - if (s->channel_mode == AC3_CHMODE_DUALMONO && ch <= 2) - audio_channel = 2-ch; - if (s->heavy_compression && s->compression_exists[audio_channel]) - gain = s->heavy_dynamic_range[audio_channel]; - else - gain = s->dynamic_range[audio_channel]; - -#if USE_FIXED - scale_coefs(s->transform_coeffs[ch], s->fixed_coeffs[ch], gain, 256); -#else - if (s->target_level != 0) - gain = gain * s->level_gain[audio_channel]; - gain *= 1.0 / 4194304.0f; - s->fmt_conv.int32_to_float_fmul_scalar(s->transform_coeffs[ch], - s->fixed_coeffs[ch], gain, 256); -#endif - } - - /* apply spectral extension to high frequency bins */ - if (CONFIG_EAC3_DECODER && s->spx_in_use) { - ff_eac3_apply_spectral_extension(s); - } - - /* downmix and MDCT. order depends on whether block switching is used for - any channel in this block. this is because coefficients for the long - and short transforms cannot be mixed. */ - downmix_output = s->channels != s->out_channels && - !((s->output_mode & AC3_OUTPUT_LFEON) && - s->fbw_channels == s->out_channels); - if (different_transforms) { - /* the delay samples have already been downmixed, so we upmix the delay - samples in order to reconstruct all channels before downmixing. */ - if (s->downmixed) { - s->downmixed = 0; - ac3_upmix_delay(s); - } - - do_imdct(s, s->channels, offset); - - if (downmix_output) { -#if USE_FIXED - ac3_downmix_c_fixed16(s->outptr, s->downmix_coeffs, - s->out_channels, s->fbw_channels, 256); -#else - ff_ac3dsp_downmix(&s->ac3dsp, s->outptr, s->downmix_coeffs, - s->out_channels, s->fbw_channels, 256); -#endif - } - } else { - if (downmix_output) { - AC3_RENAME(ff_ac3dsp_downmix)(&s->ac3dsp, s->xcfptr + 1, s->downmix_coeffs, - s->out_channels, s->fbw_channels, 256); - } - - if (downmix_output && !s->downmixed) { - s->downmixed = 1; - AC3_RENAME(ff_ac3dsp_downmix)(&s->ac3dsp, s->dlyptr, s->downmix_coeffs, - s->out_channels, s->fbw_channels, 128); - } - - do_imdct(s, s->out_channels, offset); - } - - return 0; -} - -/** - * Decode a single AC-3 frame. - */ -static int ac3_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size, full_buf_size = avpkt->size; - AC3DecodeContext *s = avctx->priv_data; - int blk, ch, err, offset, ret; - int i; - int skip = 0, got_independent_frame = 0; - const uint8_t *channel_map; - uint8_t extended_channel_map[EAC3_MAX_CHANNELS]; - const SHORTFLOAT *output[AC3_MAX_CHANNELS]; - enum AVMatrixEncoding matrix_encoding; - AVDownmixInfo *downmix_info; - uint64_t mask; - - s->superframe_size = 0; - - buf_size = full_buf_size; - i = ff_ac3_find_syncword(buf, buf_size); - if (i < 0 || i > 10) - return i; - buf += i; - buf_size -= i; - - /* copy input buffer to decoder context to avoid reading past the end - of the buffer, which can be caused by a damaged input stream. */ - if (buf_size >= 2 && AV_RB16(buf) == 0x770B) { - // seems to be byte-swapped AC-3 - int cnt = FFMIN(buf_size, AC3_FRAME_BUFFER_SIZE) >> 1; - s->bdsp.bswap16_buf((uint16_t *) s->input_buffer, - (const uint16_t *) buf, cnt); - } else - memcpy(s->input_buffer, buf, FFMIN(buf_size, AC3_FRAME_BUFFER_SIZE)); - - /* if consistent noise generation is enabled, seed the linear feedback generator - * with the contents of the AC-3 frame so that the noise is identical across - * decodes given the same AC-3 frame data, for use with non-linear edititing software. */ - if (s->consistent_noise_generation) - av_lfg_init_from_data(&s->dith_state, s->input_buffer, FFMIN(buf_size, AC3_FRAME_BUFFER_SIZE)); - - buf = s->input_buffer; -dependent_frame: - /* initialize the GetBitContext with the start of valid AC-3 Frame */ - if ((ret = init_get_bits8(&s->gbc, buf, buf_size)) < 0) - return ret; - - /* parse the syncinfo */ - err = parse_frame_header(s); - - if (err) { - switch (err) { - case AAC_AC3_PARSE_ERROR_SYNC: - av_log(avctx, AV_LOG_ERROR, "frame sync error\n"); - return AVERROR_INVALIDDATA; - case AAC_AC3_PARSE_ERROR_BSID: - av_log(avctx, AV_LOG_ERROR, "invalid bitstream id\n"); - break; - case AAC_AC3_PARSE_ERROR_SAMPLE_RATE: - av_log(avctx, AV_LOG_ERROR, "invalid sample rate\n"); - break; - case AAC_AC3_PARSE_ERROR_FRAME_SIZE: - av_log(avctx, AV_LOG_ERROR, "invalid frame size\n"); - break; - case AAC_AC3_PARSE_ERROR_FRAME_TYPE: - /* skip frame if CRC is ok. otherwise use error concealment. */ - /* TODO: add support for substreams */ - if (s->substreamid) { - av_log(avctx, AV_LOG_DEBUG, - "unsupported substream %d: skipping frame\n", - s->substreamid); - *got_frame_ptr = 0; - return buf_size; - } else { - av_log(avctx, AV_LOG_ERROR, "invalid frame type\n"); - } - break; - case AAC_AC3_PARSE_ERROR_CRC: - case AAC_AC3_PARSE_ERROR_CHANNEL_CFG: - break; - default: // Normal AVERROR do not try to recover. - *got_frame_ptr = 0; - return err; - } - } else { - /* check that reported frame size fits in input buffer */ - if (s->frame_size > buf_size) { - av_log(avctx, AV_LOG_ERROR, "incomplete frame\n"); - err = AAC_AC3_PARSE_ERROR_FRAME_SIZE; - } else if (avctx->err_recognition & (AV_EF_CRCCHECK|AV_EF_CAREFUL)) { - /* check for crc mismatch */ - if (av_crc(av_crc_get_table(AV_CRC_16_ANSI), 0, &buf[2], - s->frame_size - 2)) { - av_log(avctx, AV_LOG_ERROR, "frame CRC mismatch\n"); - if (avctx->err_recognition & AV_EF_EXPLODE) - return AVERROR_INVALIDDATA; - err = AAC_AC3_PARSE_ERROR_CRC; - } - } - } - - if (s->frame_type == EAC3_FRAME_TYPE_DEPENDENT && !got_independent_frame) { - av_log(avctx, AV_LOG_WARNING, "Ignoring dependent frame without independent frame.\n"); - *got_frame_ptr = 0; - return FFMIN(full_buf_size, s->frame_size); - } - - /* channel config */ - if (!err || (s->channels && s->out_channels != s->channels)) { - s->out_channels = s->channels; - s->output_mode = s->channel_mode; - if (s->lfe_on) - s->output_mode |= AC3_OUTPUT_LFEON; - if (s->channels > 1 && - !av_channel_layout_compare(&s->downmix_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_MONO)) { - s->out_channels = 1; - s->output_mode = AC3_CHMODE_MONO; - } else if (s->channels > 2 && - !av_channel_layout_compare(&s->downmix_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO)) { - s->out_channels = 2; - s->output_mode = AC3_CHMODE_STEREO; - } - - s->loro_center_mix_level = gain_levels[s-> center_mix_level]; - s->loro_surround_mix_level = gain_levels[s->surround_mix_level]; - s->ltrt_center_mix_level = LEVEL_MINUS_3DB; - s->ltrt_surround_mix_level = LEVEL_MINUS_3DB; - /* set downmixing coefficients if needed */ - if (s->channels != s->out_channels && !((s->output_mode & AC3_OUTPUT_LFEON) && - s->fbw_channels == s->out_channels)) { - if ((ret = set_downmix_coeffs(s)) < 0) { - av_log(avctx, AV_LOG_ERROR, "error setting downmix coeffs\n"); - return ret; - } - } - } else if (!s->channels) { - av_log(avctx, AV_LOG_ERROR, "unable to determine channel mode\n"); - return AVERROR_INVALIDDATA; - } - - mask = ff_ac3_channel_layout_tab[s->output_mode & ~AC3_OUTPUT_LFEON]; - if (s->output_mode & AC3_OUTPUT_LFEON) - mask |= AV_CH_LOW_FREQUENCY; - - av_channel_layout_uninit(&avctx->ch_layout); - av_channel_layout_from_mask(&avctx->ch_layout, mask); - - /* set audio service type based on bitstream mode for AC-3 */ - avctx->audio_service_type = s->bitstream_mode; - if (s->bitstream_mode == 0x7 && s->channels > 1) - avctx->audio_service_type = AV_AUDIO_SERVICE_TYPE_KARAOKE; - - /* decode the audio blocks */ - channel_map = ff_ac3_dec_channel_map[s->output_mode & ~AC3_OUTPUT_LFEON][s->lfe_on]; - offset = s->frame_type == EAC3_FRAME_TYPE_DEPENDENT ? AC3_MAX_CHANNELS : 0; - for (ch = 0; ch < AC3_MAX_CHANNELS; ch++) { - output[ch] = s->output[ch + offset]; - s->outptr[ch] = s->output[ch + offset]; - } - for (ch = 0; ch < s->channels; ch++) { - if (ch < s->out_channels) - s->outptr[channel_map[ch]] = s->output_buffer[ch + offset]; - } - for (blk = 0; blk < s->num_blocks; blk++) { - if (!err && decode_audio_block(s, blk, offset)) { - av_log(avctx, AV_LOG_ERROR, "error decoding the audio block\n"); - err = 1; - } - if (err) - for (ch = 0; ch < s->out_channels; ch++) - memcpy(s->output_buffer[ch + offset] + AC3_BLOCK_SIZE*blk, output[ch], AC3_BLOCK_SIZE*sizeof(SHORTFLOAT)); - for (ch = 0; ch < s->out_channels; ch++) - output[ch] = s->outptr[channel_map[ch]]; - for (ch = 0; ch < s->out_channels; ch++) { - if (!ch || channel_map[ch]) - s->outptr[channel_map[ch]] += AC3_BLOCK_SIZE; - } - } - - /* keep last block for error concealment in next frame */ - for (ch = 0; ch < s->out_channels; ch++) - memcpy(s->output[ch + offset], output[ch], AC3_BLOCK_SIZE*sizeof(SHORTFLOAT)); - - /* check if there is dependent frame */ - if (buf_size > s->frame_size) { - AC3HeaderInfo hdr; - int err; - - if (buf_size - s->frame_size <= 16) { - skip = buf_size - s->frame_size; - goto skip; - } - - if ((ret = init_get_bits8(&s->gbc, buf + s->frame_size, buf_size - s->frame_size)) < 0) - return ret; - - err = ff_ac3_parse_header(&s->gbc, &hdr); - if (err) - return err; - - if (hdr.frame_type == EAC3_FRAME_TYPE_DEPENDENT) { - if (hdr.num_blocks != s->num_blocks || s->sample_rate != hdr.sample_rate) { - av_log(avctx, AV_LOG_WARNING, "Ignoring non-compatible dependent frame.\n"); - } else { - buf += s->frame_size; - buf_size -= s->frame_size; - s->prev_output_mode = s->output_mode; - s->prev_bit_rate = s->bit_rate; - got_independent_frame = 1; - goto dependent_frame; - } - } - } -skip: - - frame->decode_error_flags = err ? FF_DECODE_ERROR_INVALID_BITSTREAM : 0; - - /* if frame is ok, set audio parameters */ - if (!err) { - avctx->sample_rate = s->sample_rate; - avctx->bit_rate = s->bit_rate + s->prev_bit_rate; - avctx->profile = s->eac3_extension_type_a == 1 ? FF_PROFILE_EAC3_DDP_ATMOS : FF_PROFILE_UNKNOWN; - } - - if (!avctx->sample_rate) { - av_log(avctx, AV_LOG_ERROR, "Could not determine the sample rate\n"); - return AVERROR_INVALIDDATA; - } - - for (ch = 0; ch < EAC3_MAX_CHANNELS; ch++) - extended_channel_map[ch] = ch; - - if (s->frame_type == EAC3_FRAME_TYPE_DEPENDENT) { - uint64_t ich_layout = ff_ac3_channel_layout_tab[s->prev_output_mode & ~AC3_OUTPUT_LFEON]; - int channel_map_size = ff_ac3_channels_tab[s->output_mode & ~AC3_OUTPUT_LFEON] + s->lfe_on; - uint64_t channel_layout; - int extend = 0; - - if (s->prev_output_mode & AC3_OUTPUT_LFEON) - ich_layout |= AV_CH_LOW_FREQUENCY; - - channel_layout = ich_layout; - for (ch = 0; ch < 16; ch++) { - if (s->channel_map & (1 << (EAC3_MAX_CHANNELS - ch - 1))) { - channel_layout |= ff_eac3_custom_channel_map_locations[ch][1]; - } - } - if (av_popcount64(channel_layout) > EAC3_MAX_CHANNELS) { - av_log(avctx, AV_LOG_ERROR, "Too many channels (%d) coded\n", - av_popcount64(channel_layout)); - return AVERROR_INVALIDDATA; - } - - av_channel_layout_uninit(&avctx->ch_layout); - av_channel_layout_from_mask(&avctx->ch_layout, channel_layout); - - for (ch = 0; ch < EAC3_MAX_CHANNELS; ch++) { - if (s->channel_map & (1 << (EAC3_MAX_CHANNELS - ch - 1))) { - if (ff_eac3_custom_channel_map_locations[ch][0]) { - int index = av_channel_layout_index_from_channel(&avctx->ch_layout, - ff_ctzll(ff_eac3_custom_channel_map_locations[ch][1])); - if (index < 0) - return AVERROR_INVALIDDATA; - if (extend >= channel_map_size) - break; - - extended_channel_map[index] = offset + channel_map[extend++]; - } else { - int i; - - for (i = 0; i < 64; i++) { - if ((1ULL << i) & ff_eac3_custom_channel_map_locations[ch][1]) { - int index = av_channel_layout_index_from_channel(&avctx->ch_layout, i); - if (index < 0) - return AVERROR_INVALIDDATA; - if (extend >= channel_map_size) - break; - - extended_channel_map[index] = offset + channel_map[extend++]; - } - } - } - } - } - - ac3_downmix(avctx); - } - - /* get output buffer */ - frame->nb_samples = s->num_blocks * AC3_BLOCK_SIZE; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - for (ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - int map = extended_channel_map[ch]; - av_assert0(ch>=AV_NUM_DATA_POINTERS || frame->extended_data[ch] == frame->data[ch]); - memcpy((SHORTFLOAT *)frame->extended_data[ch], - s->output_buffer[map], - s->num_blocks * AC3_BLOCK_SIZE * sizeof(SHORTFLOAT)); - } - - /* - * AVMatrixEncoding - * - * Check whether the input layout is compatible, and make sure we're not - * downmixing (else the matrix encoding is no longer applicable). - */ - matrix_encoding = AV_MATRIX_ENCODING_NONE; - if (s->channel_mode == AC3_CHMODE_STEREO && - s->channel_mode == (s->output_mode & ~AC3_OUTPUT_LFEON)) { - if (s->dolby_surround_mode == AC3_DSURMOD_ON) - matrix_encoding = AV_MATRIX_ENCODING_DOLBY; - else if (s->dolby_headphone_mode == AC3_DHEADPHONMOD_ON) - matrix_encoding = AV_MATRIX_ENCODING_DOLBYHEADPHONE; - } else if (s->channel_mode >= AC3_CHMODE_2F2R && - s->channel_mode == (s->output_mode & ~AC3_OUTPUT_LFEON)) { - switch (s->dolby_surround_ex_mode) { - case AC3_DSUREXMOD_ON: // EX or PLIIx - matrix_encoding = AV_MATRIX_ENCODING_DOLBYEX; - break; - case AC3_DSUREXMOD_PLIIZ: - matrix_encoding = AV_MATRIX_ENCODING_DPLIIZ; - break; - default: // not indicated or off - break; - } - } - if ((ret = ff_side_data_update_matrix_encoding(frame, matrix_encoding)) < 0) - return ret; - - /* AVDownmixInfo */ - if ((downmix_info = av_downmix_info_update_side_data(frame))) { - switch (s->preferred_downmix) { - case AC3_DMIXMOD_LTRT: - downmix_info->preferred_downmix_type = AV_DOWNMIX_TYPE_LTRT; - break; - case AC3_DMIXMOD_LORO: - downmix_info->preferred_downmix_type = AV_DOWNMIX_TYPE_LORO; - break; - case AC3_DMIXMOD_DPLII: - downmix_info->preferred_downmix_type = AV_DOWNMIX_TYPE_DPLII; - break; - default: - downmix_info->preferred_downmix_type = AV_DOWNMIX_TYPE_UNKNOWN; - break; - } - downmix_info->center_mix_level = gain_levels[s-> center_mix_level]; - downmix_info->center_mix_level_ltrt = gain_levels[s-> center_mix_level_ltrt]; - downmix_info->surround_mix_level = gain_levels[s-> surround_mix_level]; - downmix_info->surround_mix_level_ltrt = gain_levels[s->surround_mix_level_ltrt]; - if (s->lfe_mix_level_exists) - downmix_info->lfe_mix_level = gain_levels_lfe[s->lfe_mix_level]; - else - downmix_info->lfe_mix_level = 0.0; // -inf dB - } else - return AVERROR(ENOMEM); - - *got_frame_ptr = 1; - - if (!s->superframe_size) - return FFMIN(full_buf_size, s->frame_size + skip); - - return FFMIN(full_buf_size, s->superframe_size + skip); -} - -/** - * Uninitialize the AC-3 decoder. - */ -static av_cold int ac3_decode_end(AVCodecContext *avctx) -{ - AC3DecodeContext *s = avctx->priv_data; - av_tx_uninit(&s->tx_256); - av_tx_uninit(&s->tx_128); - av_freep(&s->fdsp); - av_freep(&s->downmix_coeffs[0]); - - return 0; -} - -#define OFFSET(x) offsetof(AC3DecodeContext, x) -#define PAR (AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_AUDIO_PARAM) diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lsp.h deleted file mode 100644 index 26b1382eda3f73b7b0e02e3452199716e391f51a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lsp.h +++ /dev/null @@ -1,118 +0,0 @@ -/* - * LSP computing for ACELP-based codecs - * - * Copyright (c) 2008 Vladimir Voroshilov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_LSP_H -#define AVCODEC_LSP_H - -#include - -/** - (I.F) means fixed-point value with F fractional and I integer bits -*/ - -/** - * @brief ensure a minimum distance between LSFs - * @param[in,out] lsfq LSF to check and adjust - * @param lsfq_min_distance minimum distance between LSFs - * @param lsfq_min minimum allowed LSF value - * @param lsfq_max maximum allowed LSF value - * @param lp_order LP filter order - */ -void ff_acelp_reorder_lsf(int16_t* lsfq, int lsfq_min_distance, int lsfq_min, int lsfq_max, int lp_order); - -/** - * Adjust the quantized LSFs so they are increasing and not too close. - * - * This step is not mentioned in the AMR spec but is in the reference C decoder. - * Omitting this step creates audible distortion on the sinusoidal sweep - * test vectors in 3GPP TS 26.074. - * - * @param[in,out] lsf LSFs in Hertz - * @param min_spacing minimum distance between two consecutive lsf values - * @param size size of the lsf vector - */ -void ff_set_min_dist_lsf(float *lsf, double min_spacing, int size); - -/** - * @brief Convert LSF to LSP - * @param[out] lsp LSP coefficients (-0x8000 <= (0.15) < 0x8000) - * @param lsf normalized LSF coefficients (0 <= (2.13) < 0x2000 * PI) - * @param lp_order LP filter order - * - * @remark It is safe to pass the same array into the lsf and lsp parameters. - */ -void ff_acelp_lsf2lsp(int16_t *lsp, const int16_t *lsf, int lp_order); - -/** - * Floating point version of ff_acelp_lsf2lsp() - */ -void ff_acelp_lsf2lspd(double *lsp, const float *lsf, int lp_order); - -/** - * @brief LSP to LP conversion (3.2.6 of G.729) - * @param[out] lp decoded LP coefficients (-0x8000 <= (3.12) < 0x8000) - * @param lsp LSP coefficients (-0x8000 <= (0.15) < 0x8000) - * @param lp_half_order LP filter order, divided by 2 - */ -void ff_acelp_lsp2lpc(int16_t* lp, const int16_t* lsp, int lp_half_order); - -/** - * LSP to LP conversion (5.2.4 of AMR-WB) - */ -void ff_amrwb_lsp2lpc(const double *lsp, float *lp, int lp_order); - -/** - * @brief Interpolate LSP for the first subframe and convert LSP -> LP for both subframes (3.2.5 and 3.2.6 of G.729) - * @param[out] lp_1st decoded LP coefficients for first subframe (-0x8000 <= (3.12) < 0x8000) - * @param[out] lp_2nd decoded LP coefficients for second subframe (-0x8000 <= (3.12) < 0x8000) - * @param lsp_2nd LSP coefficients of the second subframe (-0x8000 <= (0.15) < 0x8000) - * @param lsp_prev LSP coefficients from the second subframe of the previous frame (-0x8000 <= (0.15) < 0x8000) - * @param lp_order LP filter order - */ -void ff_acelp_lp_decode(int16_t* lp_1st, int16_t* lp_2nd, const int16_t* lsp_2nd, const int16_t* lsp_prev, int lp_order); - - -#define MAX_LP_HALF_ORDER 10 -#define MAX_LP_ORDER (2*MAX_LP_HALF_ORDER) - -/** - * Reconstruct LPC coefficients from the line spectral pair frequencies. - * - * @param lsp line spectral pairs in cosine domain - * @param lpc linear predictive coding coefficients - * @param lp_half_order half the number of the amount of LPCs to be - * reconstructed, need to be smaller or equal to MAX_LP_HALF_ORDER - * - * @note buffers should have a minimum size of 2*lp_half_order elements. - * - * TIA/EIA/IS-733 2.4.3.3.5 - */ -void ff_acelp_lspd2lpc(const double *lsp, float *lpc, int lp_half_order); - -/** - * Sort values in ascending order. - * - * @note O(n) if data already sorted, O(n^2) - otherwise - */ -void ff_sort_nearly_sorted_floats(float *vals, int len); - -#endif /* AVCODEC_LSP_H */ diff --git a/spaces/coldlarry/lr_pdf/README.md b/spaces/coldlarry/lr_pdf/README.md deleted file mode 100644 index ef29503e00385250b9dcbd0cf7de284c80bdc9c7..0000000000000000000000000000000000000000 --- a/spaces/coldlarry/lr_pdf/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lr Pdf -emoji: 📊 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/congsaPfin/Manga-OCR/logs/Death Worm Mod Apk v2.0.038 How to Get Unlimited Money and Gems in the Best Worm Game Ever.md b/spaces/congsaPfin/Manga-OCR/logs/Death Worm Mod Apk v2.0.038 How to Get Unlimited Money and Gems in the Best Worm Game Ever.md deleted file mode 100644 index 430b4c2d6f7f22076d18a903bf6d136e1fa4d02c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Death Worm Mod Apk v2.0.038 How to Get Unlimited Money and Gems in the Best Worm Game Ever.md +++ /dev/null @@ -1,141 +0,0 @@ - - - -
      -

      Death Worm Mod APK: How to Download and Play the Ultimate Monster Game

      -

      Have you ever wondered what it would be like to be a giant worm that can devour anything in its path? If you have, then you should try Death Worm Mod APK, a modified version of the popular arcade game Death Worm. In this game, you can control a monstrous worm that can eat humans, animals, cars, tanks, helicopters, and even aliens. You can also upgrade your worm with different skins, abilities, and weapons. Plus, you can enjoy unlimited money and gems, unlocked levels and modes, enhanced graphics and sound effects, and more.

      -

      In this article, I will show you what Death Worm Mod APK is, how to download and install it, how to play it, why you should play it, and what are some of the drawbacks of playing it. By the end of this article, you will be ready to unleash your inner monster and have fun with Death Worm Mod APK.

      -

      death worm mod apk (unlimited money and gems 2.0 038)


      Download ->->->-> https://urlca.com/2uO7UB



      -

      What is Death Worm Mod APK?

      -

      Death Worm Mod APK is a modified version of the original Death Worm game developed by PlayCreek LLC. The original game was released in 2010 for Android devices and later for iOS devices. The game was inspired by the 1984 movie Dune, where a giant sandworm terrorizes a desert planet. The game has received positive reviews from critics and players alike for its simple yet addictive gameplay, colorful graphics, and humorous sound effects.

      -

      Death Worm Mod APK is a hacked version of the original game that gives you access to unlimited money and gems, unlocked levels and modes, enhanced graphics and sound effects, and more. With these features, you can customize your worm with different skins, abilities, and weapons. You can also play in different environments such as desert, jungle, city, arctic, or space. You can also choose from different modes such as campaign mode, survival mode, or mini-games mode.

      -

      Features of Death Worm Mod APK

      -

      Death Worm Mod APK has many features that make it more fun and exciting than the original game. Here are some of them:

      -

      Unlimited Money and Gems

      -

      With Death Worm Mod APK, you don't have to worry about running out of money or gems. You can use them to buy upgrades for your worm such as speed boosters, fireballs, nitro boosters, spikes, armor plates, electric shocks, bombs, and more. You can also use them to unlock new skins for your worm such as dragon skin, robot skin, zombie skin, and more.

      -

      Unlocked Levels and Modes

      -

      With Death Worm Mod APK, you don't have to

      With Death Worm Mod APK, you don't have to complete the levels or modes in the original game to unlock them. You can play any level or mode you want from the start. You can choose from 60 levels in the campaign mode, where you have to complete different objectives and face different enemies. You can also play the survival mode, where you have to survive as long as possible and eat as many creatures as you can. You can also play the mini-games mode, where you can enjoy various mini-games such as soccer, air hockey, bowling, and more.

      -

      Enhanced Graphics and Sound Effects

      -

      With Death Worm Mod APK, you can enjoy better graphics and sound effects than the original game. The graphics are more detailed and realistic, with improved lighting and shadows. The sound effects are more loud and clear, with more variety and humor. You can hear the screams of your victims, the explosions of your weapons, and the roars of your worm.

      -

      How to Download and Install Death Worm Mod APK?

      -

      If you want to download and install Death Worm Mod APK, you need to follow these steps:

      -

      Requirements for Death Worm Mod APK

      -

      Before you download and install Death Worm Mod APK, you need to make sure that your device meets these requirements:

      -

      death worm hack apk download free unlimited coins and diamonds
      -death worm modded apk latest version 2.0 038 with all features unlocked
      -how to install death worm mod apk on android device
      -death worm unlimited money and gems apk free download for android
      -death worm game mod apk offline play without internet connection
      -death worm mod apk 2.0 038 mega mod with unlimited everything
      -download death worm mod apk from getmodsapk.com[^1^]
      -death worm cheats and tips for mod apk players
      -death worm mod apk review and gameplay video
      -death worm mod apk no root required for android devices
      -death worm monster game mod apk with unlimited power-ups and upgrades
      -death worm full version mod apk download link
      -death worm mod apk new update 2.0 038 with bug fixes and improvements
      -death worm hacked apk for android phone and tablet
      -death worm premium mod apk with no ads and in-app purchases
      -death worm simulator mod apk with realistic graphics and physics
      -death worm 2 mod apk with more levels and challenges
      -death worm pro mod apk with extra features and modes
      -death worm cracked apk with unlimited lives and health
      -death worm fun mod apk with funny sounds and animations
      -death worm original mod apk with original game content and settings
      -death worm best mod apk with high ratings and reviews
      -death worm super mod apk with super speed and size
      -death worm extreme mod apk with extreme difficulty and enemies
      -death worm ultimate mod apk with ultimate customization and options
      -death worm old version mod apk with old graphics and gameplay
      -death worm online mod apk with online multiplayer and chat
      -death worm 3d mod apk with 3d graphics and effects
      -death worm hd mod apk with hd resolution and quality
      -death worm lite mod apk with lite size and performance
      -death worm gold mod apk with gold coins and gems
      -death worm vip mod apk with vip access and benefits
      -death worm deluxe mod apk with deluxe features and items
      -death worm plus mod apk with plus bonuses and rewards
      -death worm magic mod apk with magic abilities and spells
      -death worm fire mod apk with fire breath and explosions
      -death worm ice mod apk with ice powers and freeze effects
      -death worm electric mod apk with electric shocks and sparks
      -death worm poison mod apk with poison spit and venom effects
      -death worm rainbow mod apk with rainbow colors and effects

      -
        -
      • Your device must have Android 4.1 or higher.
      • -
      • Your device must have at least 50 MB of free storage space.
      • -
      • Your device must have a stable internet connection.
      • -
      • Your device must allow installation from unknown sources. To enable this, go to Settings > Security > Unknown Sources and toggle it on.
      • -
      -

      Steps to Download and Install Death Worm Mod APK

      -

      After you have checked the requirements, you can follow these steps to download and install Death Worm Mod APK:

      -
        -
      1. Go to this link and click on the download button to download the Death Worm Mod APK file.
      2. -
      3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
      4. -
      5. Follow the instructions on the screen and wait for the installation to finish.
      6. -
      7. Launch the game from your app drawer or home screen and enjoy playing Death Worm Mod APK.
      8. -
      -

      How to Play Death Worm Mod APK?

      -

      Playing Death Worm Mod APK is easy and fun. Here are some tips on how to play it:

      -

      Controls and Gameplay

      -

      The controls of Death Worm Mod APK are simple and intuitive. You can use the joystick on the left side of the screen to move your worm around. You can also use the buttons on the right side of the screen to activate your weapons or abilities. You can also tilt your device to control your worm's direction.

      -

      The gameplay of Death Worm Mod APK is exciting and addictive. You can eat anything that moves on the surface or underground. You can also destroy buildings, vehicles, planes, and spaceships with your weapons or abilities. You can also collect coins, gems, stars, and power-ups along the way. You can use them to upgrade your worm or buy new skins, abilities, or weapons.

      -

      Tips and Tricks for Death Worm Mod APK

      -

      If you want to master Death Worm Mod APK, you need to follow these tips and tricks:

      -
        -
      • Use your weapons or abilities wisely. They have limited charges or cooldowns, so don't waste them on small targets or when you don't need them.
      • -
      • Avoid enemies that can harm you. Some enemies such as tanks, helicopters, aliens, or bosses can shoot at you or drop bombs on you. Try to dodge their attacks or destroy them before they hurt you.
      • -
      • Eat as many creatures as you can. Eating creatures will increase your score, fill up your health bar, and charge up your weapons or abilities. Eating humans will also make them scream hilariously.
      • -
      • Explore different environments and scenarios. Each environment has its own challenges and surprises. For example, in the desert, you can encounter sandstorms that reduce your visibility. In the city, you can cause traffic jams or blackouts by destroying power lines. In space, you can encounter asteroids or UFOs that can crash into you.
      • -
      • Play different modes and mini-games. Each mode and mini-game has its own objectives and rules. For example, in the campaign mode, you have to complete different missions such as eating a certain number of creatures, destroying a certain number of vehicles, or surviving for a certain amount of time. In the survival mode, you have to last as long as possible without dying. In the mini-games mode, you have to score as many points as you can by playing soccer, air hockey, bowling, and more.
      • -
      -

      Why You Should Play Death Worm Mod APK?

      -

      Death Worm Mod APK is a game that you should play if you love arcade games, monster games, or just having fun. Here are some of the reasons why you should play it:

      -

      Benefits of Playing Death Worm Mod APK

      -

      Playing Death Worm Mod APK has many benefits, such as:

      -

      Fun and Addictive Gameplay

      -

      Death Worm Mod APK is a game that will keep you entertained for hours. You will never get bored of eating, destroying, and exploring with your worm. You will also enjoy the humor and the thrill of the game. You will feel like a powerful and unstoppable monster that can do anything.

      -

      Challenge Yourself and Your Friends

      -

      Death Worm Mod APK is a game that will challenge your skills and reflexes. You will have to face different enemies, obstacles, and scenarios that will test your abilities. You will also have to complete different missions and objectives that will require your strategy and creativity. You can also compete with your friends and see who can get the highest score, the longest survival time, or the most achievements.

      -

      Explore Different Environments and Scenarios

      -

      Death Worm Mod APK is a game that will let you explore different environments and scenarios. You can play in different settings such as desert, jungle, city, arctic, or space. You can also encounter different creatures such as humans, animals, cars, tanks, helicopters, aliens, and more. You can also experience different events such as sandstorms, blackouts, asteroids, or UFOs. You will never know what to expect next.

      -

      Drawbacks of Playing Death Worm Mod APK

      -

      Playing Death Worm Mod APK also has some drawbacks, such as:

      -

      Potential Security Risks

      -

      Death Worm Mod APK is a modified version of the original game that is not authorized by the developer or the Google Play Store. This means that it may contain viruses, malware, or spyware that can harm your device or steal your personal information. You should always be careful when downloading and installing modded games from unknown sources.

      -

      Possible Compatibility Issues

      -

      Death Worm Mod APK may not work properly on some devices or Android versions. It may crash, freeze, lag, or glitch during the gameplay. It may also cause conflicts with other apps or games on your device. You should always check the requirements and reviews of the modded game before downloading and installing it.

      -

      Ethical Concerns

      -

      Death Worm Mod APK may violate the intellectual property rights of the original developer or the Google Play Store. It may also promote violence, gore, or cruelty towards humans or animals. It may also offend some people who are sensitive to these topics. You should always respect the original developer and the Google Play Store policies when playing modded games.

      -

      Conclusion

      -

      Death Worm Mod APK is a modified version of the popular arcade game Death Worm that gives you unlimited money and gems, unlocked levels and modes, enhanced graphics and sound effects, and more. It is a fun and addictive game that lets you control a giant worm that can eat anything in its path. It is also a challenging and exciting game that lets you face different enemies, obstacles, and scenarios. It is also a game that lets you explore different environments and scenarios.

      -

      However, Death Worm Mod APK also has some drawbacks such as potential security risks, possible compatibility issues, and ethical concerns. You should always be careful when downloading and installing modded games from unknown sources. You should also respect the original developer and the Google Play Store policies when playing modded games.

      -

      If you want to try Death Worm Mod APK, you can follow the steps in this article to download and install it. You can also follow the tips in this article to play it. You can also share your thoughts and experiences with Death Worm Mod APK in the comments section below.

      -

      Frequently Asked Questions

      -

      Here are some of the frequently asked questions about Death Worm Mod APK:

      -
        -
      1. Is Death Worm Mod APK safe to download and install?
      2. -

        Death Worm Mod APK is not an official version of the original game that is authorized by the developer or the Google Play Store. It may contain viruses, malware, or spyware that can harm your device or steal your personal information. You should always be careful when downloading and installing modded games from unknown sources. You should also scan the file with an antivirus software before opening it.

        -
      3. Is Death Worm Mod APK legal to play?
      4. -

        Death Worm Mod APK may violate the intellectual property rights of the original developer or the Google Play Store. It may also promote violence, gore, or cruelty towards humans or animals. It may also offend some people who are sensitive to these topics. You should always respect the original developer and the Google Play Store policies when playing modded games. You should also be aware of the legal consequences of playing modded games in your country or region.

        -
      5. How can I update Death Worm Mod APK?
      6. -

        Death Worm Mod APK may not receive regular updates from the original developer or the Google Play Store. It may also become incompatible with newer versions of Android or other apps or games on your device. You should always check the source of the modded game for any updates or patches. You should also backup your data before updating or uninstalling the modded game.

        -
      7. How can I uninstall Death Worm Mod APK?
      8. -

        If you want to uninstall Death Worm Mod APK, you can follow these steps:

        -
          -
        1. Go to Settings > Apps > Death Worm Mod APK and tap on it.
        2. -
        3. Tap on Uninstall and confirm your choice.
        4. -
        5. Wait for the uninstallation process to finish.
        6. -
        7. Delete the Death Worm Mod APK file from your device's file manager.
        8. -
        -
      9. What are some alternatives to Death Worm Mod APK?
      10. -

        If you are looking for some alternatives to Death Worm Mod APK, you can try these games:

        -
          -
        • Super Mega Worm: A similar game where you control a giant worm that can eat humans, animals, vehicles, and more. You can also upgrade your worm with different abilities and weapons.
        • -
        • Worms Zone .io: A multiplayer game where you control a worm that can grow bigger by eating food and other worms. You can also customize your worm with different skins and accessories.
        • -
        • Worms 3: A strategy game where you control a team of worms that can use various weapons and tools to defeat other teams of worms. You can also play online with your friends or other players.
        • -
        -
      -

      I hope this article has helped you learn more about Death Worm Mod APK and how to download and play it. If you have any questions or feedback, please leave them in the comments section below. Thank you for reading and have a great day!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Ultimate Car Driving Simulator MOD APK with Unlimited Money and Infinite Fun.md b/spaces/congsaPfin/Manga-OCR/logs/Download Ultimate Car Driving Simulator MOD APK with Unlimited Money and Infinite Fun.md deleted file mode 100644 index 7f12fccc67707625e402f421dc9e3b68edbad4fa..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Ultimate Car Driving Simulator MOD APK with Unlimited Money and Infinite Fun.md +++ /dev/null @@ -1,112 +0,0 @@ - -

      Ultimate Car Driving Simulator Mod APK Dinheiro Infinito: How to Download and Play

      -

      If you are a fan of car driving simulator games, you might have heard of Ultimate Car Driving Simulator, a popular game that lets you drive various cars in an open world environment. But did you know that there is a mod apk version of the game that gives you unlimited money and unlocks all the features? In this article, we will tell you what Ultimate Car Driving Simulator is, why you should use the mod apk version, how to download and install it, and how to play it. Let's get started!

      -

      ultimate car driving simulator mod apk dinheiro infinito


      DOWNLOAD ☆☆☆ https://urlca.com/2uOcBy



      -

      What is Ultimate Car Driving Simulator?

      -

      Ultimate Car Driving Simulator is a game developed by Sir Studios that allows you to experience realistic car driving physics, graphics, and sounds. You can choose from a wide range of cars, from sports cars to off-road vehicles, and customize them with various parts and accessories. You can also explore a huge open world map with different terrains, roads, and landmarks. You can perform stunts, drifts, jumps, and crashes, and enjoy the realistic damage system. You can also earn money by completing missions and challenges, and use it to buy new cars and upgrades.

      -

      Features of the game

      -

      Some of the features of Ultimate Car Driving Simulator are:

      -
        -
      • Realistic driving physics and controls
      • -
      • High-quality graphics and sound effects
      • -
      • Wide selection of cars with different characteristics
      • -
      • Customization options for cars, such as paint, wheels, spoilers, etc.
      • -
      • Huge open world map with various locations and scenarios
      • -
      • Stunt ramps, loops, bridges, and obstacles
      • -
      • Realistic damage system and car deformation
      • -
      • Missions and challenges to earn money and rewards
      • -
      • Leaderboards and achievements to compete with other players
      • -
      -

      Why use the mod apk version?

      -

      The mod apk version of Ultimate Car Driving Simulator is a modified version of the game that gives you some advantages over the original version. The mod apk version gives you unlimited money, which means you can buy any car you want and upgrade it to the max. You can also unlock all the features of the game, such as all the maps, all the customization options, all the missions, etc. The mod apk version also removes ads from the game, so you can enjoy a smoother gameplay without interruptions.

      -

      ultimate car driving simulator hack apk unlimited money
      -ultimate car driving simulator mod apk download latest version
      -ultimate car driving simulator mod apk android 1
      -ultimate car driving simulator mod apk all cars unlocked
      -ultimate car driving simulator mod apk free shopping
      -ultimate car driving simulator mod apk revdl
      -ultimate car driving simulator mod apk rexdl
      -ultimate car driving simulator mod apk happymod
      -ultimate car driving simulator mod apk 2023
      -ultimate car driving simulator mod apk offline
      -ultimate car driving simulator mod apk unlimited gems
      -ultimate car driving simulator mod apk unlimited nitro
      -ultimate car driving simulator mod apk premium unlocked
      -ultimate car driving simulator mod apk no ads
      -ultimate car driving simulator mod apk obb
      -ultimate car driving simulator mod apk data
      -ultimate car driving simulator mod apk pure
      -ultimate car driving simulator mod apk vip
      -ultimate car driving simulator mod apk pro
      -ultimate car driving simulator mod apk mega
      -ultimate car driving simulator mod apk full version
      -ultimate car driving simulator mod apk new update
      -ultimate car driving simulator mod apk old version
      -ultimate car driving simulator mod apk original
      -ultimate car driving simulator mod apk online
      -ultimate car driving simulator hack dinheiro infinito download
      -ultimate car driving simulator dinheiro infinito 2023 atualizado
      -ultimate car driving simulator dinheiro infinito e diamantes infinitos
      -ultimate car driving simulator dinheiro infinito e desbloqueado
      -ultimate car driving simulator dinheiro infinito e vip
      -ultimate car driving simulator dinheiro infinito e nitro infinito
      -ultimate car driving simulator dinheiro infinito e gemas infinitas
      -ultimate car driving simulator dinheiro infinito e tudo liberado
      -ultimate car driving simulator dinheiro infinito e premium
      -ultimate car driving simulator dinheiro infinito e sem anúncios
      -baixar ultimate car driving simulator dinheiro infinito 2023
      -como instalar ultimate car driving simulator dinheiro infinito
      -como jogar ultimate car driving simulator dinheiro infinito online
      -como atualizar ultimate car driving simulator dinheiro infinito 2023
      -como ter dinheiro infinito no jogo ultimate car driving simulator 2023

      -

      How to download and install Ultimate Car Driving Simulator Mod APK Dinheiro Infinito?

      -

      If you want to download and install Ultimate Car Driving Simulator Mod APK Dinheiro Infinito, you need to follow some simple steps. Here they are:

      -

      Requirements for the mod apk

      -

      Before you download and install the mod apk, you need to make sure that your device meets some requirements. You need to have:

      -
        -
      • An Android device with version 4.4 or higher
      • -
      • At least 100 MB of free storage space
      • -
      • A stable internet connection
      • -
      • A file manager app
      • -
      • The permission to install apps from unknown sources (you can enable this in your device settings)
      • -
      -

      Steps to download and install the mod apk

      -

      Once you have checked the requirements, you can proceed with the following steps:

      -
        -
      1. Go to [this link](^1^) and download the Ultimate Car Driving Simulator Mod APK Dinheiro Infinito file.
      2. -
      3. Locate the downloaded file in your file manager app and tap on it.
      4. Follow the instructions on the screen and install the mod apk.
      5. -
      6. Launch the game and enjoy the unlimited money and features.
      7. -
      -

      Note: You may need to uninstall the original version of the game before installing the mod apk, or use a different device to avoid any conflicts.

      -

      How to play Ultimate Car Driving Simulator Mod APK Dinheiro Infinito?

      -

      Playing Ultimate Car Driving Simulator Mod APK Dinheiro Infinito is very easy and fun. Here are some tips on how to play the game:

      -

      Choose your car and customize it

      -

      When you start the game, you will see a garage with various cars to choose from. You can swipe left or right to browse through the cars, and tap on the one you like. You can also use the filter button to sort the cars by category, such as sports, off-road, classic, etc. Once you have selected your car, you can tap on the customize button to modify its appearance and performance. You can change the color, wheels, spoilers, exhausts, and other parts of your car. You can also upgrade the engine, brakes, suspension, and other aspects of your car. You can use the unlimited money you have to buy any part or upgrade you want.

      -

      Explore the open world and perform stunts

      -

      After you have customized your car, you can tap on the play button to enter the open world map. You can drive your car anywhere you want, and enjoy the realistic physics and graphics of the game. You can also find various stunt ramps, loops, bridges, and obstacles on the map, and use them to perform amazing stunts with your car. You can also drift, jump, and crash your car, and see how it deforms and damages. You can also switch between different camera angles, such as first-person, third-person, top-down, etc., to get a different perspective of your driving.

      -

      Earn money and unlock new cars and upgrades

      -

      As you drive around the map, you will see various missions and challenges that you can complete to earn money and rewards. Some of the missions include racing against other cars, delivering packages, escaping from the police, etc. Some of the challenges include reaching a certain speed, drifting for a certain distance, jumping over a certain height, etc. You can also earn money by collecting coins and gems that are scattered around the map. You can use the money you earn to buy new cars and upgrades from the garage. You can also unlock new maps and features by completing certain achievements.

      -

      Conclusion

      -

      Ultimate Car Driving Simulator Mod APK Dinheiro Infinito is a great game for anyone who loves car driving simulator games. It gives you unlimited money and unlocks all the features of the game, so you can enjoy driving any car you want in a huge open world map. You can also customize your car with various parts and accessories, and perform stunts and missions with it. The game has realistic physics and graphics, and a variety of scenarios and challenges to keep you entertained. If you want to download and play Ultimate Car Driving Simulator Mod APK Dinheiro Infinito, just follow the steps we have provided in this article.

      -

      FAQs

      -

      Here are some frequently asked questions about Ultimate Car Driving Simulator Mod APK Dinheiro Infinito:

      -
        -
      • Is Ultimate Car Driving Simulator Mod APK Dinheiro Infinito safe to download and install?
        -Yes, Ultimate Car Driving Simulator Mod APK Dinheiro Infinito is safe to download and install. It does not contain any viruses or malware that could harm your device or data. However, you should always download it from a trusted source like [this link], and not from any unknown or suspicious websites.
      • -
      • Do I need to root my device to use Ultimate Car Driving Simulator Mod APK Dinheiro Infinito?
        -No, you do not need to root your device to use Ultimate Car Driving Simulator Mod APK Dinheiro Infinito. The mod apk works fine on both rooted and non-rooted devices.
      • -
      • Can I play Ultimate Car Driving Simulator Mod APK Dinheiro Infinito online with other players?
        -No, Ultimate Car Driving Simulator Mod APK Dinheiro Infinito is an offline game that does not support online multiplayer mode. You can only play it solo or with AI-controlled cars.
      • -
      • Can I update Ultimate Car Driving Simulator Mod APK Dinheiro Infinito to get new features?
        -Yes, you can update Ultimate Car Driving Simulator Mod APK Dinheiro Infinito whenever there is a new version available. However, you may need to uninstall the previous version of the mod apk before installing the new one, or use a different device to avoid any conflicts. You can also check [this link] for the latest updates and news about the game.
      • -
      • What are some alternatives to Ultimate Car Driving Simulator Mod APK Dinheiro Infinito?
        -If you are looking for some other car driving simulator games that you can play on your Android device, you can try some of these alternatives:
      • -
          -
        • Real Car Parking 2: A realistic car parking simulator game that tests your driving skills and accuracy. You can drive various cars and park them in different scenarios. You can also customize your cars and enjoy the 3D graphics and sound effects.
        • -
        • CarX Drift Racing 2: A thrilling car drifting simulator game that lets you experience the adrenaline of drifting. You can choose from a variety of cars and tracks, and compete with other players online. You can also tune and upgrade your cars and create your own club.
        • -
        • Extreme Car Driving Simulator: A fun car driving simulator game that lets you drive fast and furious. You can drive freely in a city with no traffic or rules, and perform stunts and burnouts. You can also choose from different modes, such as checkpoint, traffic, free mode, etc.
        • -
        -
      -

      I hope you enjoyed this article on Ultimate Car Driving Simulator Mod APK Dinheiro Infinito. If you have any questions or feedback, feel free to leave a comment below. Happy driving!

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Join the Cookie Run Kingdom Adventure - Download the APK Now.md b/spaces/congsaPfin/Manga-OCR/logs/Join the Cookie Run Kingdom Adventure - Download the APK Now.md deleted file mode 100644 index 142a5bcf1800d3bb52af011fd6605e619e4f12a1..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Join the Cookie Run Kingdom Adventure - Download the APK Now.md +++ /dev/null @@ -1,116 +0,0 @@ - -

      Cookie Run: Kingdom APK - A Sweet and Fun Game for Android

      -

      Do you love cookies? Do you love games? If you answered yes to both questions, then you will love Cookie Run: Kingdom APK, a sweet and fun game for Android devices. In this game, you can build your own cookie kingdom, fight against evil forces, collect and upgrade cookie characters, and join guilds with other players. Sounds delicious, right? Let's find out more about this game in this article.

      -

      What is Cookie Run: Kingdom APK?

      -

      Cookie Run: Kingdom APK is a kingdom builder and battle RPG game developed by Devsisters Corporation, the same company that created the popular Cookie Run series. It is a sequel to Cookie Run: OvenBreak, which was released in 2016. In this game, you can explore the colorful and cute world of cookies, where you can create your own cookie kingdom, fight against the dark legion of the Dark Enchantress Cookie, and discover the secrets of the ancient cookies and their kingdoms.

      -

      cookie run kingdom apk


      Download Zip 🆓 https://urlca.com/2uO5eL



      -

      A kingdom builder and battle RPG game

      -

      In Cookie Run: Kingdom APK, you can design your own cookie kingdom with various decors, such as buildings, plants, furniture, and more. You can also expand your territory by clearing stages and defeating enemies. You can also fight against other players in PvP mode, where you can test your skills and strategies.

      -

      A sequel to the popular Cookie Run series

      -

      Cookie Run: Kingdom APK is a continuation of the story of GingerBrave and his friends, who escaped from the oven in Cookie Run: OvenBreak. In this game, they face a new threat from the Dark Enchantress Cookie, who wants to destroy all the cookie kingdoms. You can join them in their adventure and meet new cookie characters along the way.

      -

      A colorful and cute world of cookies

      -

      Cookie Run: Kingdom APK has a charming graphics style that will appeal to both kids and adults. The game features a variety of cookie characters, each with their own personality, voice, and skills. The game also has a lively soundtrack and sound effects that match the mood of the game.

      -

      How to download and install Cookie Run: Kingdom APK?

      -

      If you want to play Cookie Run: Kingdom APK on your Android device, you can download it from Google Play or APKCombo. Here are the steps to do so:

      -

      cookie run kingdom apk download
      -cookie run kingdom apk mod
      -cookie run kingdom apk latest version
      -cookie run kingdom apk obb
      -cookie run kingdom apk android
      -cookie run kingdom apk ios
      -cookie run kingdom apk free
      -cookie run kingdom apk offline
      -cookie run kingdom apk update
      -cookie run kingdom apk hack
      -cookie run kingdom apk file
      -cookie run kingdom apk mirror
      -cookie run kingdom apk pure
      -cookie run kingdom apk data
      -cookie run kingdom apk nox
      -cookie run kingdom apk bluestacks
      -cookie run kingdom apk reddit
      -cookie run kingdom apk 4.3.002
      -cookie run kingdom apk 4.2.001
      -cookie run kingdom apk 4.1.001
      -cookie run kingdom apk 4.0.001
      -cookie run kingdom apk 3.0.001
      -cookie run kingdom apk 2.0.001
      -cookie run kingdom apk 1.0.001
      -cookie run kingdom apk beta
      -cookie run kingdom apk global
      -cookie run kingdom apk english
      -cookie run kingdom apk korean
      -cookie run kingdom apk chinese
      -cookie run kingdom apk japanese
      -cookie run kingdom apk german
      -cookie run kingdom apk french
      -cookie run kingdom apk spanish
      -cookie run kingdom apk portuguese
      -cookie run kingdom apk italian
      -cookie run kingdom apk russian
      -cookie run kingdom apk turkish
      -cookie run kingdom apk arabic
      -cookie run kingdom apk thai
      -cookie run kingdom apk vietnamese
      -cookie run kingdom apk indonesian
      -cookie run kingdom apk malay
      -cookie run kingdom apk filipino
      -cookie run kingdom apk hindi
      -cookie run kingdom apk urdu
      -cookie run kingdom apk bengali
      -cookie run kingdom apk tamil
      -cookie run kingdom apk telugu
      -cookie run kingdom apk marathi

      -

      Download from Google Play or APKCombo

      -

      You can download Cookie Run: Kingdom APK from Google Play by searching for it on the app store or by clicking [here](^2^). Alternatively, you can download it from APKCombo by searching for it on the website or by clicking [here](^1^). The file size is about 100 MB.

      -

      Enable unknown sources on your device

      -

      If you download Cookie Run: Kingdom APK from APKCombo, you need to enable unknown sources on your device to install it. To do this, go to Settings > Security and enable the option to install apps from unknown sources. This will allow you to install Cookie Run: Kingdom APK on your device.

      -

      Install the APK file and enjoy the game

      -

      Once you have downloaded Cookie Run: Kingdom APK, you can install it by tapping on the file and following the instructions. After the installation is complete, you can open the game and start playing. You may need to grant some permissions to the game, such as access to your storage, location, and contacts.

      -

      What are the features of Cookie Run: Kingdom APK?

      -

      Cookie Run: Kingdom APK is a game that offers a lot of features for you to enjoy. Here are some of them:

      -

      Build your own cookie kingdom with various decors

      -

      In Cookie Run: Kingdom APK, you can customize your own cookie kingdom with different types of decors, such as buildings, plants, furniture, and more. You can also unlock new decors by clearing stages and completing quests. You can arrange your decors according to your preference and style. You can also visit other players' kingdoms and see how they decorated theirs.

      -

      Fight against the dark legion of the Dark Enchantress Cookie

      -

      In Cookie Run: Kingdom APK, you can also engage in battles against the dark legion of the Dark Enchantress Cookie, who wants to destroy all the cookie kingdoms. You can form a team of up to five cookie characters, each with their own skills and abilities. You can also use special items and combos to enhance your performance. You can fight in various modes, such as story mode, guild mode, PvP mode, and more.

      -

      Collect and upgrade over 200 cookie characters

      -

      In Cookie Run: Kingdom APK, you can collect and upgrade over 200 cookie characters, each with their own personality, voice, and skills. You can obtain new cookie characters by summoning them with crystals or cookies. You can also upgrade your cookie characters by leveling them up, enhancing their skills, equipping them with treasures, and awakening them. You can also mix and match different cookie characters to create your own unique team.

      -

      Join guilds and cooperate with other players

      -

      In Cookie Run: Kingdom APK, you can also join guilds and cooperate with other players. You can chat with your guild members, share tips and strategies, and help each other out. You can also participate in guild battles, where you can compete with other guilds for rewards and glory. You can also join events and challenges that are exclusive for guild members.

      -

      What are the pros and cons of Cookie Run: Kingdom APK?

      -

      Cookie Run: Kingdom APK is a game that has its pros and cons. Here are some of them:

      -

      Pros

      -
        -
      • Fun and addictive gameplay

        -

        Cookie Run: Kingdom APK is a game that offers a fun and addictive gameplay that will keep you entertained for hours. You can enjoy building your own cookie kingdom, fighting against enemies, collecting and upgrading cookie characters, and joining guilds with other players. The game also has a lot of content and features that will make you want to play more.

      • -
      • Charming graphics and sound effects

        -

        Cookie Run: Kingdom APK is a game that has a charming graphics style that will appeal to both kids and adults. The game features a variety of cookie characters, each with their own personality, voice, and skills. The game also has a lively soundtrack and sound effects that match the mood of the game.

      • -
      • Free to play with regular updates

        -

        Cookie Run: Kingdom APK is a game that is free to play with regular updates. You can download and play the game without spending any money. The game also provides regular updates that add new content and features to the game, such as new cookie characters, new stages, new events, and more.

      • -
      -

      Cons

      -
        -
      • Requires internet connection and storage space

        -

        Cookie Run: Kingdom APK is a game that requires internet connection and storage space to play. You need to have a stable internet connection to access the game's features and modes. You also need to have enough storage space on your device to download and install the game.

      • -
      • May have some bugs and glitches

        -

        Cookie Run: Kingdom APK is a game that may have some bugs and glitches that affect the gameplay. Some users have reported issues such as crashing, freezing, lagging, loading errors, login errors, and more. The developers are working on fixing these issues as soon as possible.

      • -
      • < h4>May have some in-app purchases and ads -

        Cookie Run: Kingdom APK is a game that may have some in-app purchases and ads that may affect the gameplay. Some users may find the in-app purchases and ads to be annoying or unfair. The game also has a stamina system that limits the number of stages you can play per day. You can buy more stamina with crystals or cookies, which can be obtained by playing the game or by spending real money.

      • -
      -

      Conclusion

      -

      Cookie Run: Kingdom APK is a sweet and fun game for Android devices that lets you build your own cookie kingdom, fight against evil forces, collect and upgrade cookie characters, and join guilds with other players. The game has a fun and addictive gameplay, charming graphics and sound effects, and free to play with regular updates. However, the game also requires internet connection and storage space, may have some bugs and glitches, and may have some in-app purchases and ads. If you are looking for a game that will make you hungry for cookies and adventure, you should try Cookie Run: Kingdom APK.

      -

      FAQs

      -
        -
      • Q: What are the minimum requirements to play Cookie Run: Kingdom APK?

        -

        A: The minimum requirements to play Cookie Run: Kingdom APK are Android 4.4 or higher, 2 GB of RAM, and 100 MB of storage space.

      • -
      • Q: How can I get more crystals or cookies in Cookie Run: Kingdom APK?

        -

        A: You can get more crystals or cookies by playing the game, completing quests, participating in events, watching ads, or buying them with real money.

      • -
      • Q: How can I contact the developers of Cookie Run: Kingdom APK?

        -

        A: You can contact the developers of Cookie Run: Kingdom APK by sending an email to cookierun@devsisters.com or by visiting their official website [here].

      • -
      • Q: How can I join a guild in Cookie Run: Kingdom APK?

        -

        A: You can join a guild in Cookie Run: Kingdom APK by tapping on the guild icon on the main screen, searching for a guild that suits your preferences, and applying to join it. You can also create your own guild if you have enough crystals.

      • -
      • Q: How can I update Cookie Run: Kingdom APK?

        -

        A: You can update Cookie Run: Kingdom APK by downloading the latest version from Google Play or APKCombo. You can also check for updates by tapping on the settings icon on the main screen and selecting the update option.

      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Music Tiles - Magic Tiles with MOD APK and Get Unlimited Money and Gems.md b/spaces/congsaPfin/Manga-OCR/logs/Play Music Tiles - Magic Tiles with MOD APK and Get Unlimited Money and Gems.md deleted file mode 100644 index 34ab9b3b7ae8891f0a645d94083e664743bc23bc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Music Tiles - Magic Tiles with MOD APK and Get Unlimited Money and Gems.md +++ /dev/null @@ -1,145 +0,0 @@ -
      -

      Music Tiles - Magic Tiles Mod APK: A Fun and Relaxing Piano Game

      -

      Do you love music and piano? Do you want to play your favorite songs on your mobile device? Do you want to have unlimited money and gems to unlock new songs and items? If you answered yes to any of these questions, then you should try Music Tiles - Magic Tiles Mod APK, a fun and relaxing piano game that will keep you entertained for hours.

      -

      music tiles magic tiles mod apk unlimited money and gems


      DOWNLOAD · https://urlca.com/2uO5fX



      -

      What is Music Tiles - Magic Tiles?

      -

      Music Tiles - Magic Tiles is a music piano game that lets you play various songs on a virtual keyboard. You can choose from different genres, such as pop, rock, classical, anime, and more. You can also select different modes, such as normal, hard, endless, battle, and custom. The game has a relaxing visual design that changes according to the song and mode. You can also customize your keyboard with different themes and colors.

      -

      Music Tiles - Magic Tiles Mod APK is a modified version of the original game that gives you unlimited money and gems. With these resources, you can unlock new songs, themes, colors, and items without spending real money. You can also enjoy the game without ads or interruptions.

      -

      How to download and install Music Tiles - Magic Tiles Mod APK?

      -

      If you want to download and install Music Tiles - Magic Tiles Mod APK, you need to follow these steps:

      -
        -
      1. Go to [this link](^1^) and download the mod apk file.
      2. -
      3. Enable unknown sources on your device settings.
      4. -
      5. Locate the downloaded file and tap on it to install it.
      6. -
      7. Launch the game and enjoy playing with unlimited money and gems.
      8. -
      -

      Before you download and install Music Tiles - Magic Tiles Mod APK, you need to know these permissions and requirements:

      -
        -
      • The mod apk file size is 75 MB.
      • -
      • The mod apk file requires Android 4.4 or higher.
      • -
      • The mod apk file may not work on some devices or regions.
      • -
      -

      By downloading and installing Music Tiles - Magic Tiles Mod APK, you can enjoy these features and benefits:

      -

      music tiles magic tiles mod apk download free unlimited gems
      -music tiles magic tiles hack mod apk unlimited money and diamonds
      -music tiles magic tiles piano game mod apk unlimited coins and gems
      -music tiles magic tiles latest version mod apk unlimited everything
      -music tiles magic tiles premium mod apk unlocked all songs and gems
      -music tiles magic tiles cheats mod apk free money and gems
      -music tiles magic tiles offline mod apk unlimited cash and gems
      -music tiles magic tiles pro mod apk unlimited lives and gems
      -music tiles magic tiles vip mod apk unlimited gold and gems
      -music tiles magic tiles 3d mod apk unlimited stars and gems
      -music tiles magic tiles 2023 mod apk unlimited keys and gems
      -music tiles magic tiles online mod apk unlimited energy and gems
      -music tiles magic tiles no ads mod apk unlimited rewards and gems
      -music tiles magic tiles best mod apk unlimited levels and gems
      -music tiles magic tiles new mod apk unlimited songs and gems
      -music tiles magic tiles fun mod apk unlimited challenges and gems
      -music tiles magic tiles easy mod apk unlimited hints and gems
      -music tiles magic tiles hard mod apk unlimited speed and gems
      -music tiles magic tiles classic mod apk unlimited notes and gems
      -music tiles magic tiles pop mod apk unlimited coins and diamonds
      -music tiles magic tiles rock mod apk unlimited money and stars
      -music tiles magic tiles jazz mod apk unlimited cash and diamonds
      -music tiles magic tiles edm mod apk unlimited gold and stars
      -music tiles magic tiles rap mod apk unlimited lives and diamonds
      -music tiles magic tiles hip hop mod apk unlimited energy and stars
      -music tiles magic tiles country mod apk unlimited rewards and diamonds
      -music tiles magic tiles reggae mod apk unlimited levels and stars
      -music tiles magic tiles r&b mod apk unlimited songs and diamonds
      -music tiles magic tiles disco mod apk unlimited challenges and stars
      -music tiles magic

      -
        -
      • You can access all songs, themes, colors, and items without paying or waiting.
      • -
      • You can play the game without ads or interruptions.
      • -
      • You can have more fun and challenge with different modes and levels.
      • -
      -

      How to play Music Tiles - Magic Tiles Mod APK?

      -

      The basic rules of playing Music Tiles - Magic Tiles Mod APK are simple:

      -
        -
      • Select a song and a mode from the menu.
      • -
      • Tap on the black tiles as they appear on the screen.
      • -
      • Avoid tapping on the white tiles or missing any black tiles.
      • -
      • Follow the rhythm and tempo of the song.
      • -
      -

      Here are some tips for playing Music Tiles - Magic Tiles Mod APK:

      -
        -
      • -
      • Use the hint button to show the next tile.
      • -
      • Use the double tap button to tap two tiles at once.
      • -
      • Use the bomb button to clear all tiles on the screen.
      • -
      -

      The game has different modes and levels that you can play:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -Play the song with your own settings and preferences. - - - -
      ModeDescriptionLevelDifficulty
      NormalPlay the song with normal speed and difficulty.Easy, Medium, Hard, ExpertLow, Medium, High, Very High
      HardPlay the song with faster speed and higher difficulty.Easy, Medium, Hard, ExpertMedium, High, Very High, Extreme
      EndlessPlay the song with endless tiles and increasing speed and difficulty.N/AN/A
      BattlePlay the song with another player online and compete for the highest score.N/AN/A
      CustomN/AN/A
      -

      The game also has rewards and achievements that you can earn:

      -
        -
      • You can earn money and gems by playing songs, completing missions, watching ads, and spinning the wheel.
      • -
      • You can use money and gems to unlock new songs, themes, colors, and items.
      • -
      • You can earn stars by playing songs and achieving high scores.
      • -
      • You can use stars to unlock new modes and levels.
      • -
      • You can earn badges by playing songs and completing achievements.
      • -
      • You can use badges to show off your skills and progress.
      • -
      -

      Why should you play Music Tiles - Magic Tiles Mod APK?

      -

      There are many reasons why you should play Music Tiles - Magic Tiles Mod APK. Here are some of them:

      -
        -
      • Playing a music piano game can improve your musical skills, such as rhythm, timing, coordination, and memory.
      • -
      • Playing different songs and modes can provide you with fun and challenge, as well as variety and diversity.
      • -
      • Playing with unlimited money and gems can give you satisfaction and convenience, as well as freedom and creativity.
      • -
      -

      Music Tiles - Magic Tiles Mod APK is a fun and relaxing piano game that will make you feel like a piano master. You can play your favorite songs on a virtual keyboard, enjoy the soothing visuals and sounds, and unlock new songs and items with unlimited money and gems. You can also play different modes and levels, earn rewards and achievements, and compete with other players online. Music Tiles - Magic Tiles Mod APK is a game that you will love to play and play again.

      -

      Conclusion

      -

      In conclusion, Music Tiles - Magic Tiles Mod APK is a music piano game that lets you play various songs on a virtual keyboard. You can choose from different genres, such as pop, rock, classical, anime, and more. You can also select different modes, such as normal, hard, endless, battle, and custom. The game has a relaxing visual design that changes according to the song and mode. You can also customize your keyboard with different themes and colors. Music Tiles - Magic Tiles Mod APK is a modified version of the original game that gives you unlimited money and gems. With these resources, you can unlock new songs, themes, colors, and items without spending real money. You can also enjoy the game without ads or interruptions. Music Tiles - Magic Tiles Mod APK is a fun and relaxing piano game that will keep you entertained for hours.

      -

      If you want to try Music Tiles - Magic Tiles Mod APK, you can download it from [this link] and follow the steps to install it on your device. You will not regret it!

      -

      Here are five unique FAQs after the conclusion:

      -
        -
      1. Q: How many songs are available in Music Tiles - Magic Tiles Mod APK?
      2. -
      3. A: There are over 1000 songs in Music Tiles - Magic Tiles Mod APK, covering different genres, such as pop, rock, classical, anime, and more. You can unlock all of them with unlimited money and gems.
      4. -
      5. Q: How do I change the theme or color of my keyboard in Music Tiles - Magic Tiles Mod APK?
      6. -
      7. A: You can change the theme or color of your keyboard in Music Tiles - Magic Tiles Mod APK by tapping on the settings icon on the top right corner of the screen. You can choose from different themes, such as wood, marble, neon, rainbow, galaxy, etc. You can also choose from different colors, such as red, blue, green, yellow, pink, etc. You can unlock more themes and colors with unlimited money and gems.
      8. -
      9. Q: How do I play online with other players in Music Tiles - Magic Tiles Mod APK?
      10. -
      11. A: You can play online with other players in Music Tiles - Magic Tiles Mod APK by tapping on the battle mode icon on the main menu. You will be matched with another player who is playing the same song as you. You will see their score and progress on the top of the screen. The player who has the higher score at the end of the song wins the battle.
      12. -
      13. Q: How do I earn badges in Music Tiles - Magic Tiles Mod APK?
      14. -
      15. A: You can earn badges in Music Tiles - Magic Tiles Mod APK by playing songs and completing achievements. There are different types of badges, such as bronze, silver, gold, platinum, diamond, etc. Each badge has a different requirement for earning it. For example, to earn the bronze badge for playing pop songs, you need to play 10 pop songs. To earn the diamond badge for playing hard mode songs, you need to play 100 hard mode songs.
      16. -
      17. Q: How do I use the pause, hint, double tap, and bomb buttons in Music Tiles - Magic Tiles Mod APK?
      18. -
      19. A: You can use the pause, hint, double tap, and bomb buttons in Music Tiles - Magic Tiles Mod APK by tapping on them on the bottom of the screen. The pause button lets you pause and resume the game. The hint button shows you the next tile to tap. The double tap button lets you tap two tiles at once. The bomb button clears all tiles on the screen. You can use these buttons once per song, and you can buy more with unlimited money and gems.
      20. -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Plus Red APK How to Download and Install This Amazing WhatsApp Alternative on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Plus Red APK How to Download and Install This Amazing WhatsApp Alternative on Your Android Device.md deleted file mode 100644 index 031b134bb04cf8ed696e0f54626de2a6f86b3458..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/WhatsApp Plus Red APK How to Download and Install This Amazing WhatsApp Alternative on Your Android Device.md +++ /dev/null @@ -1,87 +0,0 @@ - -

      WhatsApp Plus Red APK Download: What You Need to Know

      -

      WhatsApp is one of the most popular messaging apps in the world, with over two billion users. However, some people are not satisfied with the official app and look for unofficial versions that offer more features and customization options. One of these versions is WhatsApp Plus Red, a modified version of WhatsApp that claims to have more functions and themes than the original app. But is it safe and legal to use? And what are the alternatives if you want to switch from WhatsApp? In this article, we will answer these questions and more.

      -

      What is WhatsApp Plus Red?

      -

      WhatsApp Plus Red is an unofficial app made by the developer Alex Mods that is similar to the original WhatsApp. It has a red theme and icon, and offers some extra features that WhatsApp does not have. Some of these features are:

      -

      whatsapp plus red apk download


      Download Filehttps://urlca.com/2uO7QR



      -

      Features of WhatsApp Plus Red

      -
        -
      • Prevents deletion of messages. If your contacts regret something they have written to you, they will not have the opportunity to withdraw their messages, because they will remain registered in your chats in the same way.
      • -
      • It prevents them from seeing when you are typing.
      • -
      • Blocks them from seeing when you’ve seen a status.
      • -
      • You can change the colors, fonts, and themes of WhatsApp.
      • -
      • You can disable voice calls and hide your profile picture.
      • -
      • Multiple account support - up to 4 accounts.
      • -
      • Possible to 'undelete' previously sent messages.
      • -
      -

      How to Download and Install WhatsApp Plus Red

      -

      To download and install WhatsApp Plus Red, you need to follow these steps:

      -
        -
      1. Uninstall the official version of WhatsApp from your device.
      2. -
      3. Go to the website whatsplus.org and download the latest version of WhatsApp Plus Red APK (17.20).
      4. -
      5. Enable the option to install from unknown sources on your device settings.
      6. -
      7. Install the APK file on your device.
      8. -
      9. Enter your phone number and verify the code sent to you.
      10. -
      11. Choose your profile picture and start using WhatsApp Plus Red.
      12. -
      -

      What are the Risks of Using WhatsApp Plus Red?

      -

      While WhatsApp Plus Red may seem appealing for some users, it also comes with some risks that you should be aware of before using it. These risks include:

      -

      Security and Privacy Issues

      -

      WhatsApp Plus Red is not an official app, and it is not available on the Google Play Store. This means that it is not verified by Google or Meta, and it may contain malware or spyware that can harm your device or steal your data. Moreover, WhatsApp Plus Red does not guarantee end-to-end encryption for all your messages, unlike the official app. This means that your messages may be intercepted or accessed by third parties, including the developer of the app or hackers. Therefore, using WhatsApp Plus Red may compromise your security and privacy.

      -

      whatsapp plus red latest version apk download
      -whatsapp plus red apk download for android
      -whatsapp plus red apk download 2023
      -whatsapp plus red apk download free
      -whatsapp plus red apk download alex mods
      -whatsapp plus red apk download may 2023
      -whatsapp plus red apk download link
      -whatsapp plus red apk download official website
      -whatsapp plus red apk download without ban
      -whatsapp plus red apk download new features
      -whatsapp plus red apk download anti revoke
      -whatsapp plus red apk download with stickers
      -whatsapp plus red apk download modded
      -whatsapp plus red apk download no root
      -whatsapp plus red apk download safe and secure
      -whatsapp plus red apk download update
      -whatsapp plus red apk download how to install
      -whatsapp plus red apk download backup and restore
      -whatsapp plus red apk download custom themes
      -whatsapp plus red apk download hide online status
      -whatsapp plus red apk download video call
      -whatsapp plus red apk download group chat
      -whatsapp plus red apk download media sharing
      -whatsapp plus red apk download privacy settings
      -whatsapp plus red apk download dark mode
      -whatsapp plus red apk download alternative apps
      -whatsapp plus red apk download pros and cons
      -whatsapp plus red apk download reviews and ratings
      -whatsapp plus red apk download faq and support
      -whatsapp plus red apk download tips and tricks

      -

      Possible Ban from WhatsApp

      -

      Another risk of using WhatsApp Plus Red is that you may be banned from using the official WhatsApp app. Meta does not allow users to use unofficial versions of its apps, and it may detect if you are using one. If this happens, you may receive a warning message or a temporary ban from WhatsApp. In some cases, you may even face a permanent ban if you continue to use an unofficial app. This means that you will lose access to all your chats, contacts, and media on WhatsApp.

      -

      What are the Alternatives to WhatsApp Plus Red?

      -

      If you are looking for a messaging app that offers more features and customization options than WhatsApp, but without the risks of using an unofficial app, you may want to consider some alternatives. Some of the alternatives to WhatsApp Plus Red are:

      -

      Signal

      -

      Signal is a messaging app that focuses on privacy and security. It uses end-to-end encryption for all your messages, calls, and video chats, and does not collect any metadata or personal information from its users. You can also send disappearing messages, blur faces in photos, and lock the app with a passcode or biometric authentication. Signal is open-source and funded by donations, so you don't have to worry about ads or data selling. Signal is available for iOS, Android, Windows, macOS, and Linux.

      -

      Telegram

      -

      Telegram is another popular messaging app that offers fast, secure, and cloud-based communication. You can send text, voice, video, and media messages to your contacts, as well as create groups and channels for up to 200,000 members. Telegram also supports end-to-end encryption for secret chats, self-destructing messages, and bots that can enhance your experience. Telegram has a web version and desktop apps for Windows, macOS, and Linux, as well as mobile apps for iOS and Android.

      -

      Discord

      -

      Discord is a messaging app that is mainly designed for gamers and communities. You can create or join servers that host various channels for different topics and purposes. You can also chat with your friends via text, voice, or video, and share your screen or stream games. Discord has a lot of customization options and integrations with other apps and services. Discord is free to use, but you can upgrade to Discord Nitro for more features and perks. Discord has apps for iOS, Android, Windows, macOS, and Linux, as well as a web version.

      -

      Conclusion

      -

      WhatsApp Plus Red is an unofficial app that claims to offer more features and themes than the original WhatsApp app. However, it also comes with some risks that may compromise your security, privacy, and access to WhatsApp. If you are looking for a messaging app that has more functions and customization options than WhatsApp, but without the dangers of using an unofficial app, you may want to consider some of the alternatives we mentioned above. These apps are safe, legal, and reliable, and they may suit your needs better than WhatsApp Plus Red.

      -

      FAQs

      -
        -
      • Is WhatsApp Plus Red legal?
        -WhatsApp Plus Red is not illegal per se, but it violates the terms of service of WhatsApp. This means that WhatsApp can ban you from using its service if it detects that you are using an unofficial app.
      • -
      • Is WhatsApp Plus Red safe?
        -WhatsApp Plus Red is not safe to use because it is not verified by Google or Meta, and it may contain malware or spyware that can harm your device or steal your data. Moreover, WhatsApp Plus Red does not guarantee end-to-end encryption for all your messages, unlike the official app.
      • -
      • How can I update WhatsApp Plus Red?
        -To update WhatsApp Plus Red, you need to uninstall the old version of the app from your device and download the latest version of the APK file from the website whatsplus.org. Then you need to install the APK file on your device and verify your phone number again.
      • -
      • Can I use WhatsApp Plus Red with the official WhatsApp app?
        -No, you cannot use WhatsApp Plus Red with the official WhatsApp app on the same device. You need to uninstall the official app before installing WhatsApp Plus Red.
      • -
      • Can I restore my chats from WhatsApp to WhatsApp Plus Red?
        -Yes, you can restore your chats from WhatsApp to WhatsApp Plus Red if you have a backup of your chats on Google Drive or your device storage. You need to select the option to restore chats when you install WhatsApp Plus Red on your device.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/apis/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/apis/__init__.py deleted file mode 100644 index 170724be38de42daf2bc1a1910e181d68818f165..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/apis/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .inference import inference_segmentor, init_segmentor, show_result_pyplot -from .test import multi_gpu_test, single_gpu_test -from .train import get_root_logger, set_random_seed, train_segmentor - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor', - 'inference_segmentor', 'multi_gpu_test', 'single_gpu_test', - 'show_result_pyplot' -] diff --git a/spaces/cpluoiudy00001/QQsign/devices/device_8958.js b/spaces/cpluoiudy00001/QQsign/devices/device_8958.js deleted file mode 100644 index 455ddb0108b70276949e6539926481590a98e0d9..0000000000000000000000000000000000000000 --- a/spaces/cpluoiudy00001/QQsign/devices/device_8958.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 -qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." + this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform = exports.Platform || (exports.Platform = {})); -const mobile = { - id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.58.11175", - version: "8.9.58.11175", - ver: "8.9.58", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1684467300, - appid: 16, - subid: 537163194, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2545", - display: "Android_8.9.58", - qua: 'V1_AND_SQ_8.9.58_4108_YYB_D', - ssover: 20, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537163242, - display: 'aPad_8.9.58' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: '8.9.50.611', - ver: '8.9.50', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/models/experimental.py b/spaces/crashedice/signify/SOURCE/yolo_files/models/experimental.py deleted file mode 100644 index 73f0ae87930ae2acd76006556fd48d47932ebc21..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/SOURCE/yolo_files/models/experimental.py +++ /dev/null @@ -1,136 +0,0 @@ -# YOLOv5 experimental modules - -import numpy as np -import torch -import torch.nn as nn - -from SOURCE.yolo_files.models.common import Conv, DWConv -from SOURCE.yolo_files.utils.google_utils import attempt_download - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super(CrossConv, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super(Sum, self).__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super(GhostConv, self).__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat([y, self.cv2(y)], 1) - - -class GhostBottleneck(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super(GhostBottleneck, self).__init__() - c_ = c2 // 2 - self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), - Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super(MixConv2d, self).__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super(Ensemble, self).__init__() - - def forward(self, x, augment=False): - y = [] - for module in self: - y.append(module(x, augment)[0]) - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - -def attempt_load(weights, map_location=None, inplace=True): - from SOURCE.yolo_files.models.yolo import Detect, Model - - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - model = Ensemble() - for w in weights if isinstance(weights, list) else [weights]: - attempt_download(w) - ckpt = torch.load(w, map_location=map_location) # load - model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model - - # Compatibility updates - for m in model.modules(): - if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]: - m.inplace = inplace # pytorch 1.7.0 compatibility - elif type(m) is Conv: - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - - if len(model) == 1: - return model[-1] # return model - else: - print('Ensemble created with %s\n' % weights) - for k in ['names', 'stride']: - setattr(model, k, getattr(model[-1], k)) - return model # return ensemble diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/docs/install.md b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/docs/install.md deleted file mode 100644 index 6314a40441285e9236438e468caf8b71a407531a..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/docs/install.md +++ /dev/null @@ -1,51 +0,0 @@ -## v1.8.0 -### Linux and Windows -```shell -# CUDA 11.0 -pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 - -# CPU only -pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -``` - - -## v1.7.1 -### Linux and Windows -```shell -# CUDA 11.0 -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 - -# CUDA 10.1 -pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -``` - - -## v1.6.0 - -### Linux and Windows -```shell -# CUDA 10.2 -pip install torch==1.6.0 torchvision==0.7.0 - -# CUDA 10.1 -pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -``` \ No newline at end of file diff --git a/spaces/dawood/Kanye-AI/README.md b/spaces/dawood/Kanye-AI/README.md deleted file mode 100644 index 3b488a9349e19e03614951f57684cf82d12a3a70..0000000000000000000000000000000000000000 --- a/spaces/dawood/Kanye-AI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Kanye AI -emoji: 📊 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dawood/microsoft_windows/theme_dropdown.py b/spaces/dawood/microsoft_windows/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/dawood/microsoft_windows/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/daydayup1225/Chat-web/app_modules/utils.py b/spaces/daydayup1225/Chat-web/app_modules/utils.py deleted file mode 100644 index 5dd5626958a6c9c9a4c50208e95a836de1660094..0000000000000000000000000000000000000000 --- a/spaces/daydayup1225/Chat-web/app_modules/utils.py +++ /dev/null @@ -1,454 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html -import markdown2 -import torch -import sys -import gc -from pygments.lexers import guess_lexer, ClassNotFound - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import guess_lexer,get_lexer_by_name -from pygments.formatters import HtmlFormatter -import transformers -from peft import PeftModel -from transformers import LlamaForCausalLM, LlamaTokenizer - -from transformers import ( - AutoTokenizer, - AutoModelForCausalLM, -) - -from app_modules.presets import * - -logging.basicConfig( - level=logging.INFO, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - lang = lang.strip() - #print(1,lang) - if lang=="text": - lexer = guess_lexer(code) - lang = lexer.name - #print(2,lang) - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("python", stripall=True) - formatter = HtmlFormatter() - #print(3,lexer.name) - highlighted_code = highlight(code, lexer, formatter) - - return f'
      {highlighted_code}
      ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - -def convert_asis(userinput): - return f"

      {html.escape(userinput)}

      "+ALREADY_CONVERTED_MARK - -def detect_converted_mark(userinput): - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - -def convert_to_markdown(text): - text = text.replace("$","$") - def replace_leading_tabs_and_spaces(line): - new_line = [] - - for char in line: - if char == "\t": - new_line.append(" ") - elif char == " ": - new_line.append(" ") - else: - break - return "".join(new_line) + line[len(new_line):] - - markdown_text = "" - lines = text.split("\n") - in_code_block = False - - for line in lines: - if in_code_block is False and line.startswith("```"): - in_code_block = True - markdown_text += "```\n" - elif in_code_block is True and line.startswith("```"): - in_code_block = False - markdown_text += "```\n" - elif in_code_block: - markdown_text += f"{line}\n" - else: - line = replace_leading_tabs_and_spaces(line) - line = re.sub(r"^(#)", r"\\\1", line) - markdown_text += f"{line} \n" - - return markdown_text - -def add_language_tag(text): - def detect_language(code_block): - try: - lexer = guess_lexer(code_block) - return lexer.name.lower() - except ClassNotFound: - return "" - - code_block_pattern = re.compile(r"(```)(\w*\n[^`]+```)", re.MULTILINE) - - def replacement(match): - code_block = match.group(2) - if match.group(2).startswith("\n"): - language = detect_language(code_block) - if language: - return f"```{language}{code_block}```" - else: - return f"```\n{code_block}```" - else: - return match.group(1) + code_block + "```" - - text2 = code_block_pattern.sub(replacement, text) - return text2 - -def delete_last_conversation(chatbot, history): - if len(chatbot) > 0: - chatbot.pop() - - if len(history) > 0: - history.pop() - - return ( - chatbot, - history, - "Delete Done", - ) - -def reset_state(): - return [], [], "Reset Done" - -def reset_textbox(): - return gr.update(value=""),"" - -def cancel_outputing(): - return "Stop Done" - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=True), - ) - - -class State: - interrupted = False - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False -shared_state = State() - - - - - -# Greedy Search -def greedy_search(input_ids: torch.Tensor, - model: torch.nn.Module, - tokenizer: transformers.PreTrainedTokenizer, - stop_words: list, - max_length: int, - temperature: float = 1.0, - top_p: float = 1.0, - top_k: int = 25) -> Iterator[str]: - generated_tokens = [] - past_key_values = None - current_length = 1 - for i in range(max_length): - with torch.no_grad(): - if past_key_values is None: - outputs = model(input_ids) - else: - outputs = model(input_ids[:, -1:], past_key_values=past_key_values) - logits = outputs.logits[:, -1, :] - past_key_values = outputs.past_key_values - - # apply temperature - logits /= temperature - - probs = torch.softmax(logits, dim=-1) - # apply top_p - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > top_p - probs_sort[mask] = 0.0 - - # apply top_k - # if top_k is not None: - # probs_sort1, _ = torch.topk(probs_sort, top_k) - # min_top_probs_sort = torch.min(probs_sort1, dim=-1, keepdim=True).values - # probs_sort = torch.where(probs_sort < min_top_probs_sort, torch.full_like(probs_sort, float(0.0)), probs_sort) - - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = torch.multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - - input_ids = torch.cat((input_ids, next_token), dim=-1) - - generated_tokens.append(next_token[0].item()) - text = tokenizer.decode(generated_tokens) - - yield text - if any([x in text for x in stop_words]): - del past_key_values - del logits - del probs - del probs_sort - del probs_idx - del probs_sum - gc.collect() - return - -def generate_prompt_with_history(text,history,tokenizer,max_length=2048): - prompt = "The following is a conversation between a human and an AI assistant. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!" - history = ["\n[|Human|]{}\n[|AI|]{}".format(x[0],x[1]) for x in history] - history.append("\n[|Human|]{}\n[|AI|]".format(text)) - history_text = "" - flag = False - for x in history[::-1]: - if tokenizer(prompt+history_text+x, return_tensors="pt")['input_ids'].size(-1) <= max_length: - history_text = x + history_text - flag = True - else: - break - if flag: - return prompt+history_text,tokenizer(prompt+history_text, return_tensors="pt") - else: - return None - - -def is_stop_word_or_prefix(s: str, stop_words: list) -> bool: - for stop_word in stop_words: - if s.endswith(stop_word): - return True - for i in range(1, len(stop_word)): - if s.endswith(stop_word[:i]): - return True - return False - - - -def load_tokenizer_and_model(base_model, adapter_model, load_8bit=False): #base_model, adapter_model, load_8bit=False - if torch.cuda.is_available(): - device = "cuda" - else: - device = "cpu" - - try: - if torch.backends.mps.is_available(): - device = "mps" - except: # noqa: E722 - pass - # tokenizer = LlamaTokenizer.from_pretrained(base_model) - # if device == "cuda": - # model = LlamaForCausalLM.from_pretrained( - # base_model, - # load_in_8bit=load_8bit, - # torch_dtype=torch.float16, - # device_map="auto", - # ) - # model = PeftModel.from_pretrained( - # model, - # adapter_model, - # torch_dtype=torch.float16, - # ) - # elif device == "mps": - # model = LlamaForCausalLM.from_pretrained( - # base_model, - # device_map={"": device}, - # torch_dtype=torch.float16, - # ) - # model = PeftModel.from_pretrained( - # model, - # adapter_model, - # device_map={"": device}, - # torch_dtype=torch.float16, - # ) - # else: - # model = LlamaForCausalLM.from_pretrained( - # base_model, device_map={"": device}, low_cpu_mem_usage=True - # ) - # model = PeftModel.from_pretrained( - # model, - # adapter_model, - # device_map={"": device}, - # ) - - tokenizer = AutoTokenizer.from_pretrained(base_model) - if device == "cuda": - model = AutoModelForCausalLM.from_pretrained( - base_model, - load_in_8bit=load_8bit, - torch_dtype=torch.float16, - device_map="auto", - ) - model = PeftModel.from_pretrained( - model, - adapter_model, - torch_dtype=torch.float16, - ) - elif device == "mps": - model = AutoModelForCausalLM.from_pretrained( - base_model, - device_map={"": device}, - torch_dtype=torch.float16, - ) - model = PeftModel.from_pretrained( - model, - adapter_model, - device_map={"": device}, - torch_dtype=torch.float16, - ) - else: - model = AutoModelForCausalLM.from_pretrained( - base_model, device_map={"": device}, low_cpu_mem_usage=True - ) - model = PeftModel.from_pretrained( - model, - adapter_model, - device_map={"": device}, - ) - - # if not load_8bit: - # model.half() # seems to fix bugs for some users. - - model.eval() - return tokenizer, model, device - - - -def load_finetune_tokenizer_and_model(base_model_name_or_path, load_8bit=False): - if torch.cuda.is_available(): - device = "cuda" - else: - device = "cpu" - - try: - if torch.backends.mps.is_available(): - device = "mps" - except: # noqa: E722 - pass - - tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path) - if device == "cuda": - model = AutoModelForCausalLM.from_pretrained( - base_model_name_or_path, - load_in_8bit=load_8bit, - torch_dtype=torch.float16, - device_map="auto", - ) - elif device == "mps": - model = AutoModelForCausalLM.from_pretrained( - base_model_name_or_path, - device_map={"": device}, - torch_dtype=torch.float16, - ) - else: - model = AutoModelForCausalLM.from_pretrained( - base_model_name_or_path, device_map={"": device}, low_cpu_mem_usage=True - ) - - # if not load_8bit: - # model.half() # seems to fix bugs for some users. - - model.eval() - return tokenizer, model, device - diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/BdfFontFile.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/BdfFontFile.py deleted file mode 100644 index 075d462907abcace9610a686052e643582602a8f..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/BdfFontFile.py +++ /dev/null @@ -1,122 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# bitmap distribution font (bdf) file parser -# -# history: -# 1996-05-16 fl created (as bdf2pil) -# 1997-08-25 fl converted to FontFile driver -# 2001-05-25 fl removed bogus __init__ call -# 2002-11-20 fl robustification (from Kevin Cazabon, Dmitry Vasiliev) -# 2003-04-22 fl more robustification (from Graham Dumpleton) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -""" -Parse X Bitmap Distribution Format (BDF) -""" - - -from . import FontFile, Image - -bdf_slant = { - "R": "Roman", - "I": "Italic", - "O": "Oblique", - "RI": "Reverse Italic", - "RO": "Reverse Oblique", - "OT": "Other", -} - -bdf_spacing = {"P": "Proportional", "M": "Monospaced", "C": "Cell"} - - -def bdf_char(f): - # skip to STARTCHAR - while True: - s = f.readline() - if not s: - return None - if s[:9] == b"STARTCHAR": - break - id = s[9:].strip().decode("ascii") - - # load symbol properties - props = {} - while True: - s = f.readline() - if not s or s[:6] == b"BITMAP": - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - - # load bitmap - bitmap = [] - while True: - s = f.readline() - if not s or s[:7] == b"ENDCHAR": - break - bitmap.append(s[:-1]) - bitmap = b"".join(bitmap) - - # The word BBX - # followed by the width in x (BBw), height in y (BBh), - # and x and y displacement (BBxoff0, BByoff0) - # of the lower left corner from the origin of the character. - width, height, x_disp, y_disp = [int(p) for p in props["BBX"].split()] - - # The word DWIDTH - # followed by the width in x and y of the character in device pixels. - dwx, dwy = [int(p) for p in props["DWIDTH"].split()] - - bbox = ( - (dwx, dwy), - (x_disp, -y_disp - height, width + x_disp, -y_disp), - (0, 0, width, height), - ) - - try: - im = Image.frombytes("1", (width, height), bitmap, "hex", "1") - except ValueError: - # deal with zero-width characters - im = Image.new("1", (width, height)) - - return id, int(props["ENCODING"]), bbox, im - - -class BdfFontFile(FontFile.FontFile): - """Font file plugin for the X11 BDF format.""" - - def __init__(self, fp): - super().__init__() - - s = fp.readline() - if s[:13] != b"STARTFONT 2.1": - msg = "not a valid BDF file" - raise SyntaxError(msg) - - props = {} - comments = [] - - while True: - s = fp.readline() - if not s or s[:13] == b"ENDPROPERTIES": - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - if s[:i] in [b"COMMENT", b"COPYRIGHT"]: - if s.find(b"LogicalFontDescription") < 0: - comments.append(s[i + 1 : -1].decode("ascii")) - - while True: - c = bdf_char(fp) - if not c: - break - id, ch, (xy, dst, src), im = c - if 0 <= ch < len(self.glyph): - self.glyph[ch] = xy, dst, src, im diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/highlighted_text.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/highlighted_text.py deleted file mode 100644 index 690a1f0e8e7d0b4b9d6c3b24dfd33e67c78917c8..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/highlighted_text.py +++ /dev/null @@ -1,206 +0,0 @@ -"""gr.HighlightedText() component.""" - -from __future__ import annotations - -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import ( - JSONSerializable, -) - -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import ( - Changeable, - EventListenerMethod, - Selectable, -) - -set_documentation_group("component") - - -@document() -class HighlightedText(Changeable, Selectable, IOComponent, JSONSerializable): - """ - Displays text that contains spans that are highlighted by category or numerical value. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a {List[Tuple[str, float | str]]]} consisting of spans of text and their associated labels, or a {Dict} with two keys: (1) "text" whose value is the complete text, and (2) "entities", which is a list of dictionaries, each of which have the keys: "entity" (consisting of the entity label, can alternatively be called "entity_group"), "start" (the character index where the label starts), and "end" (the character index where the label ends). Entities should not overlap. - - Demos: diff_texts, text_analysis - Guides: named-entity-recognition - """ - - def __init__( - self, - value: list[tuple[str, str | float | None]] | dict | Callable | None = None, - *, - color_map: dict[str, str] - | None = None, # Parameter moved to HighlightedText.style() - show_legend: bool = False, - combine_adjacent: bool = False, - adjacent_separator: str = "", - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Default value to show. If callable, the function will be called whenever the app loads to set the initial value of the component. - show_legend: whether to show span categories in a separate legend or inline. - combine_adjacent: If True, will merge the labels of adjacent tokens belonging to the same category. - adjacent_separator: Specifies the separator to be used between tokens if combine_adjacent is True. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.color_map = color_map - self.show_legend = show_legend - self.combine_adjacent = combine_adjacent - self.adjacent_separator = adjacent_separator - self.select: EventListenerMethod - """ - Event listener for when the user selects Highlighted text span. - Uses event data gradio.SelectData to carry `value` referring to selected [text, label] tuple, and `index` to refer to span index. - See EventData documentation on how to use this event data. - """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - return { - "color_map": self.color_map, - "show_legend": self.show_legend, - "value": self.value, - "selectable": self.selectable, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: list[tuple[str, str | float | None]] - | dict - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - color_map: dict[str, str] | None = None, - show_legend: bool | None = None, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - ): - updated_config = { - "color_map": color_map, - "show_legend": show_legend, - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "__type__": "update", - } - return updated_config - - def postprocess( - self, y: list[tuple[str, str | float | None]] | dict | None - ) -> list[tuple[str, str | float | None]] | None: - """ - Parameters: - y: List of (word, category) tuples, or a dictionary of two keys: "text", and "entities", which itself is a list of dictionaries, each of which have the keys: "entity" (or "entity_group"), "start", and "end" - Returns: - List of (word, category) tuples - """ - if y is None: - return None - if isinstance(y, dict): - try: - text = y["text"] - entities = y["entities"] - except KeyError as ke: - raise ValueError( - "Expected a dictionary with keys 'text' and 'entities' " - "for the value of the HighlightedText component." - ) from ke - if len(entities) == 0: - y = [(text, None)] - else: - list_format = [] - index = 0 - entities = sorted(entities, key=lambda x: x["start"]) - for entity in entities: - list_format.append((text[index : entity["start"]], None)) - entity_category = entity.get("entity") or entity.get("entity_group") - list_format.append( - (text[entity["start"] : entity["end"]], entity_category) - ) - index = entity["end"] - list_format.append((text[index:], None)) - y = list_format - if self.combine_adjacent: - output = [] - running_text, running_category = None, None - for text, category in y: - if running_text is None: - running_text = text - running_category = category - elif category == running_category: - running_text += self.adjacent_separator + text - elif not text: - # Skip fully empty item, these get added in processing - # of dictionaries. - pass - else: - output.append((running_text, running_category)) - running_text = text - running_category = category - if running_text is not None: - output.append((running_text, running_category)) - return output - else: - return y - - def style( - self, - *, - color_map: dict[str, str] | None = None, - container: bool | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if container is not None: - self.container = container - if color_map is not None: - self.color_map = color_map - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/__version__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/__version__.py deleted file mode 100644 index 6a8e63c60262fc2650cb5c71514a4b23f949aa58..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/httpx/__version__.py +++ /dev/null @@ -1,3 +0,0 @@ -__title__ = "httpx" -__description__ = "A next generation HTTP client, for Python 3." -__version__ = "0.24.1" diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py deleted file mode 100644 index 263f1b8de8dcdd0dd736eeafab2d9da34ec2c205..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_block/fence.py +++ /dev/null @@ -1,101 +0,0 @@ -# fences (``` lang, ~~~ lang) -import logging - -from .state_block import StateBlock - -LOGGER = logging.getLogger(__name__) - - -def fence(state: StateBlock, startLine: int, endLine: int, silent: bool) -> bool: - LOGGER.debug("entering fence: %s, %s, %s, %s", state, startLine, endLine, silent) - - haveEndMarker = False - pos = state.bMarks[startLine] + state.tShift[startLine] - maximum = state.eMarks[startLine] - - if state.is_code_block(startLine): - return False - - if pos + 3 > maximum: - return False - - marker = state.src[pos] - - if marker not in ("~", "`"): - return False - - # scan marker length - mem = pos - pos = state.skipCharsStr(pos, marker) - - length = pos - mem - - if length < 3: - return False - - markup = state.src[mem:pos] - params = state.src[pos:maximum] - - if marker == "`" and marker in params: - return False - - # Since start is found, we can report success here in validation mode - if silent: - return True - - # search end of block - nextLine = startLine - - while True: - nextLine += 1 - if nextLine >= endLine: - # unclosed block should be autoclosed by end of document. - # also block seems to be autoclosed by end of parent - break - - pos = mem = state.bMarks[nextLine] + state.tShift[nextLine] - maximum = state.eMarks[nextLine] - - if pos < maximum and state.sCount[nextLine] < state.blkIndent: - # non-empty line with negative indent should stop the list: - # - ``` - # test - break - - try: - if state.src[pos] != marker: - continue - except IndexError: - break - - if state.is_code_block(nextLine): - continue - - pos = state.skipCharsStr(pos, marker) - - # closing code fence must be at least as long as the opening one - if pos - mem < length: - continue - - # make sure tail has spaces only - pos = state.skipSpaces(pos) - - if pos < maximum: - continue - - haveEndMarker = True - # found! - break - - # If a fence has heading spaces, they should be removed from its inner block - length = state.sCount[startLine] - - state.line = nextLine + (1 if haveEndMarker else 0) - - token = state.push("fence", "code", 0) - token.info = params - token.content = state.getLines(startLine + 1, nextLine, length, True) - token.markup = markup - token.map = [startLine, state.line] - - return True diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_pndm_flax.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_pndm_flax.py deleted file mode 100644 index c654f2de8dd3e4f96403cce4b9db8f8b7b69861f..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_pndm_flax.py +++ /dev/null @@ -1,511 +0,0 @@ -# Copyright 2023 Zhejiang University Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax -import jax.numpy as jnp - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import ( - CommonSchedulerState, - FlaxKarrasDiffusionSchedulers, - FlaxSchedulerMixin, - FlaxSchedulerOutput, - add_noise_common, -) - - -@flax.struct.dataclass -class PNDMSchedulerState: - common: CommonSchedulerState - final_alpha_cumprod: jnp.ndarray - - # setable values - init_noise_sigma: jnp.ndarray - timesteps: jnp.ndarray - num_inference_steps: Optional[int] = None - prk_timesteps: Optional[jnp.ndarray] = None - plms_timesteps: Optional[jnp.ndarray] = None - - # running values - cur_model_output: Optional[jnp.ndarray] = None - counter: Optional[jnp.int32] = None - cur_sample: Optional[jnp.ndarray] = None - ets: Optional[jnp.ndarray] = None - - @classmethod - def create( - cls, - common: CommonSchedulerState, - final_alpha_cumprod: jnp.ndarray, - init_noise_sigma: jnp.ndarray, - timesteps: jnp.ndarray, - ): - return cls( - common=common, - final_alpha_cumprod=final_alpha_cumprod, - init_noise_sigma=init_noise_sigma, - timesteps=timesteps, - ) - - -@dataclass -class FlaxPNDMSchedulerOutput(FlaxSchedulerOutput): - state: PNDMSchedulerState - - -class FlaxPNDMScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques, - namely Runge-Kutta method and a linear multi-step method. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2202.09778 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`jnp.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - skip_prk_steps (`bool`): - allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required - before plms steps; defaults to `False`. - set_alpha_to_one (`bool`, default `False`): - each diffusion step uses the value of alphas product at that step and at the previous one. For the final - step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the value of alpha at step 0. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`): - the `dtype` used for params and computation. - """ - - _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers] - - dtype: jnp.dtype - pndm_order: int - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[jnp.ndarray] = None, - skip_prk_steps: bool = False, - set_alpha_to_one: bool = False, - steps_offset: int = 0, - prediction_type: str = "epsilon", - dtype: jnp.dtype = jnp.float32, - ): - self.dtype = dtype - - # For now we only support F-PNDM, i.e. the runge-kutta method - # For more information on the algorithm please take a look at the paper: https://arxiv.org/pdf/2202.09778.pdf - # mainly at formula (9), (12), (13) and the Algorithm 2. - self.pndm_order = 4 - - def create_state(self, common: Optional[CommonSchedulerState] = None) -> PNDMSchedulerState: - if common is None: - common = CommonSchedulerState.create(self) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. - final_alpha_cumprod = ( - jnp.array(1.0, dtype=self.dtype) if self.config.set_alpha_to_one else common.alphas_cumprod[0] - ) - - # standard deviation of the initial noise distribution - init_noise_sigma = jnp.array(1.0, dtype=self.dtype) - - timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1] - - return PNDMSchedulerState.create( - common=common, - final_alpha_cumprod=final_alpha_cumprod, - init_noise_sigma=init_noise_sigma, - timesteps=timesteps, - ) - - def set_timesteps(self, state: PNDMSchedulerState, num_inference_steps: int, shape: Tuple) -> PNDMSchedulerState: - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`PNDMSchedulerState`): - the `FlaxPNDMScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - shape (`Tuple`): - the shape of the samples to be generated. - """ - - step_ratio = self.config.num_train_timesteps // num_inference_steps - # creates integer timesteps by multiplying by ratio - # rounding to avoid issues when num_inference_step is power of 3 - _timesteps = (jnp.arange(0, num_inference_steps) * step_ratio).round() + self.config.steps_offset - - if self.config.skip_prk_steps: - # for some models like stable diffusion the prk steps can/should be skipped to - # produce better results. When using PNDM with `self.config.skip_prk_steps` the implementation - # is based on crowsonkb's PLMS sampler implementation: https://github.com/CompVis/latent-diffusion/pull/51 - - prk_timesteps = jnp.array([], dtype=jnp.int32) - plms_timesteps = jnp.concatenate([_timesteps[:-1], _timesteps[-2:-1], _timesteps[-1:]])[::-1] - - else: - prk_timesteps = _timesteps[-self.pndm_order :].repeat(2) + jnp.tile( - jnp.array([0, self.config.num_train_timesteps // num_inference_steps // 2], dtype=jnp.int32), - self.pndm_order, - ) - - prk_timesteps = (prk_timesteps[:-1].repeat(2)[1:-1])[::-1] - plms_timesteps = _timesteps[:-3][::-1] - - timesteps = jnp.concatenate([prk_timesteps, plms_timesteps]) - - # initial running values - - cur_model_output = jnp.zeros(shape, dtype=self.dtype) - counter = jnp.int32(0) - cur_sample = jnp.zeros(shape, dtype=self.dtype) - ets = jnp.zeros((4,) + shape, dtype=self.dtype) - - return state.replace( - timesteps=timesteps, - num_inference_steps=num_inference_steps, - prk_timesteps=prk_timesteps, - plms_timesteps=plms_timesteps, - cur_model_output=cur_model_output, - counter=counter, - cur_sample=cur_sample, - ets=ets, - ) - - def scale_model_input( - self, state: PNDMSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None - ) -> jnp.ndarray: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. - sample (`jnp.ndarray`): input sample - timestep (`int`, optional): current timestep - - Returns: - `jnp.ndarray`: scaled input sample - """ - return sample - - def step( - self, - state: PNDMSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - return_dict: bool = True, - ) -> Union[FlaxPNDMSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - This function calls `step_prk()` or `step_plms()` depending on the internal variable `counter`. - - Args: - state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than FlaxPNDMSchedulerOutput class - - Returns: - [`FlaxPNDMSchedulerOutput`] or `tuple`: [`FlaxPNDMSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if state.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - if self.config.skip_prk_steps: - prev_sample, state = self.step_plms(state, model_output, timestep, sample) - else: - prk_prev_sample, prk_state = self.step_prk(state, model_output, timestep, sample) - plms_prev_sample, plms_state = self.step_plms(state, model_output, timestep, sample) - - cond = state.counter < len(state.prk_timesteps) - - prev_sample = jax.lax.select(cond, prk_prev_sample, plms_prev_sample) - - state = state.replace( - cur_model_output=jax.lax.select(cond, prk_state.cur_model_output, plms_state.cur_model_output), - ets=jax.lax.select(cond, prk_state.ets, plms_state.ets), - cur_sample=jax.lax.select(cond, prk_state.cur_sample, plms_state.cur_sample), - counter=jax.lax.select(cond, prk_state.counter, plms_state.counter), - ) - - if not return_dict: - return (prev_sample, state) - - return FlaxPNDMSchedulerOutput(prev_sample=prev_sample, state=state) - - def step_prk( - self, - state: PNDMSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - ) -> Union[FlaxPNDMSchedulerOutput, Tuple]: - """ - Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the - solution to the differential equation. - - Args: - state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than FlaxPNDMSchedulerOutput class - - Returns: - [`FlaxPNDMSchedulerOutput`] or `tuple`: [`FlaxPNDMSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if state.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - diff_to_prev = jnp.where( - state.counter % 2, 0, self.config.num_train_timesteps // state.num_inference_steps // 2 - ) - prev_timestep = timestep - diff_to_prev - timestep = state.prk_timesteps[state.counter // 4 * 4] - - model_output = jax.lax.select( - (state.counter % 4) != 3, - model_output, # remainder 0, 1, 2 - state.cur_model_output + 1 / 6 * model_output, # remainder 3 - ) - - state = state.replace( - cur_model_output=jax.lax.select_n( - state.counter % 4, - state.cur_model_output + 1 / 6 * model_output, # remainder 0 - state.cur_model_output + 1 / 3 * model_output, # remainder 1 - state.cur_model_output + 1 / 3 * model_output, # remainder 2 - jnp.zeros_like(state.cur_model_output), # remainder 3 - ), - ets=jax.lax.select( - (state.counter % 4) == 0, - state.ets.at[0:3].set(state.ets[1:4]).at[3].set(model_output), # remainder 0 - state.ets, # remainder 1, 2, 3 - ), - cur_sample=jax.lax.select( - (state.counter % 4) == 0, - sample, # remainder 0 - state.cur_sample, # remainder 1, 2, 3 - ), - ) - - cur_sample = state.cur_sample - prev_sample = self._get_prev_sample(state, cur_sample, timestep, prev_timestep, model_output) - state = state.replace(counter=state.counter + 1) - - return (prev_sample, state) - - def step_plms( - self, - state: PNDMSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - ) -> Union[FlaxPNDMSchedulerOutput, Tuple]: - """ - Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple - times to approximate the solution. - - Args: - state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than FlaxPNDMSchedulerOutput class - - Returns: - [`FlaxPNDMSchedulerOutput`] or `tuple`: [`FlaxPNDMSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if state.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - # NOTE: There is no way to check in the jitted runtime if the prk mode was ran before - - prev_timestep = timestep - self.config.num_train_timesteps // state.num_inference_steps - prev_timestep = jnp.where(prev_timestep > 0, prev_timestep, 0) - - # Reference: - # if state.counter != 1: - # state.ets.append(model_output) - # else: - # prev_timestep = timestep - # timestep = timestep + self.config.num_train_timesteps // state.num_inference_steps - - prev_timestep = jnp.where(state.counter == 1, timestep, prev_timestep) - timestep = jnp.where( - state.counter == 1, timestep + self.config.num_train_timesteps // state.num_inference_steps, timestep - ) - - # Reference: - # if len(state.ets) == 1 and state.counter == 0: - # model_output = model_output - # state.cur_sample = sample - # elif len(state.ets) == 1 and state.counter == 1: - # model_output = (model_output + state.ets[-1]) / 2 - # sample = state.cur_sample - # state.cur_sample = None - # elif len(state.ets) == 2: - # model_output = (3 * state.ets[-1] - state.ets[-2]) / 2 - # elif len(state.ets) == 3: - # model_output = (23 * state.ets[-1] - 16 * state.ets[-2] + 5 * state.ets[-3]) / 12 - # else: - # model_output = (1 / 24) * (55 * state.ets[-1] - 59 * state.ets[-2] + 37 * state.ets[-3] - 9 * state.ets[-4]) - - state = state.replace( - ets=jax.lax.select( - state.counter != 1, - state.ets.at[0:3].set(state.ets[1:4]).at[3].set(model_output), # counter != 1 - state.ets, # counter 1 - ), - cur_sample=jax.lax.select( - state.counter != 1, - sample, # counter != 1 - state.cur_sample, # counter 1 - ), - ) - - state = state.replace( - cur_model_output=jax.lax.select_n( - jnp.clip(state.counter, 0, 4), - model_output, # counter 0 - (model_output + state.ets[-1]) / 2, # counter 1 - (3 * state.ets[-1] - state.ets[-2]) / 2, # counter 2 - (23 * state.ets[-1] - 16 * state.ets[-2] + 5 * state.ets[-3]) / 12, # counter 3 - (1 / 24) - * (55 * state.ets[-1] - 59 * state.ets[-2] + 37 * state.ets[-3] - 9 * state.ets[-4]), # counter >= 4 - ), - ) - - sample = state.cur_sample - model_output = state.cur_model_output - prev_sample = self._get_prev_sample(state, sample, timestep, prev_timestep, model_output) - state = state.replace(counter=state.counter + 1) - - return (prev_sample, state) - - def _get_prev_sample(self, state: PNDMSchedulerState, sample, timestep, prev_timestep, model_output): - # See formula (9) of PNDM paper https://arxiv.org/pdf/2202.09778.pdf - # this function computes x_(t−δ) using the formula of (9) - # Note that x_t needs to be added to both sides of the equation - - # Notation ( -> - # alpha_prod_t -> α_t - # alpha_prod_t_prev -> α_(t−δ) - # beta_prod_t -> (1 - α_t) - # beta_prod_t_prev -> (1 - α_(t−δ)) - # sample -> x_t - # model_output -> e_θ(x_t, t) - # prev_sample -> x_(t−δ) - alpha_prod_t = state.common.alphas_cumprod[timestep] - alpha_prod_t_prev = jnp.where( - prev_timestep >= 0, state.common.alphas_cumprod[prev_timestep], state.final_alpha_cumprod - ) - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - if self.config.prediction_type == "v_prediction": - model_output = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample - elif self.config.prediction_type != "epsilon": - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon` or `v_prediction`" - ) - - # corresponds to (α_(t−δ) - α_t) divided by - # denominator of x_t in formula (9) and plus 1 - # Note: (α_(t−δ) - α_t) / (sqrt(α_t) * (sqrt(α_(t−δ)) + sqr(α_t))) = - # sqrt(α_(t−δ)) / sqrt(α_t)) - sample_coeff = (alpha_prod_t_prev / alpha_prod_t) ** (0.5) - - # corresponds to denominator of e_θ(x_t, t) in formula (9) - model_output_denom_coeff = alpha_prod_t * beta_prod_t_prev ** (0.5) + ( - alpha_prod_t * beta_prod_t * alpha_prod_t_prev - ) ** (0.5) - - # full formula (9) - prev_sample = ( - sample_coeff * sample - (alpha_prod_t_prev - alpha_prod_t) * model_output / model_output_denom_coeff - ) - - return prev_sample - - def add_noise( - self, - state: PNDMSchedulerState, - original_samples: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - return add_noise_common(state.common, original_samples, noise, timesteps) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/descript/vampnet/vampnet/modules/transformer.py b/spaces/descript/vampnet/vampnet/modules/transformer.py deleted file mode 100644 index 0858644d363d50c9395b2fbf5177f7ad5659114b..0000000000000000000000000000000000000000 --- a/spaces/descript/vampnet/vampnet/modules/transformer.py +++ /dev/null @@ -1,953 +0,0 @@ -import math -import logging -from typing import Optional, Tuple, Union - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange -import loralib as lora -import audiotools as at - -from .activations import get_activation -from .layers import CodebookEmbedding -from .layers import FiLM -from .layers import SequentialWithFiLM -from .layers import WNConv1d -from ..util import scalar_to_batch_tensor, codebook_flatten, codebook_unflatten -from ..mask import _gamma - -LORA_R = 8 - -# def log(t, eps=1e-20): -# return torch.log(t + eps) - - -def gumbel_noise_like(t): - noise = torch.zeros_like(t).uniform_(1e-20, 1) - return -torch.log(-torch.log(noise)) - - -def gumbel_sample(t, temperature=1.0, dim=-1): - return ((t / max(temperature, 1e-10)) + gumbel_noise_like(t)).argmax(dim=dim) - - -class RMSNorm(nn.Module): - def __init__(self, hidden_size: int, eps=1e-6): - super().__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.var_eps = eps - - def forward(self, x): - """Returns root mean square normalized version of input `x` - # T5 uses a layer_norm which only scales and doesn't shift, which is also known - # as Root Mean Square Layer Normalization https://arxiv.org/abs/1910.07467 - # thus varience is calculated w/o mean and there is no bias - Parameters - ---------- - x : Tensor[B x T x D] - Returns - ------- - Tensor[B x T x D] - """ - var = x.pow(2).mean(-1, keepdim=True) - x = x * torch.rsqrt(var + self.var_eps) - - return self.weight * x - - -class FeedForward(nn.Module): - def __init__( - self, d_model: int = 512, dropout: float = 0.1, activation: str = "geglu" - ): - super().__init__() - factor = 2 if activation == "geglu" else 1 - self.w_1 = lora.Linear(d_model, d_model * 4, bias=False, r=LORA_R) - self.w_2 = lora.Linear(d_model * 4 // factor, d_model, bias=False, r=LORA_R) - self.drop = nn.Dropout(dropout) - self.act = get_activation(activation)() - - def forward(self, x): - """Computes position-wise feed-forward layer - Parameters - ---------- - x : Tensor[B x T x D] - Returns - ------- - Tensor[B x T x D] - """ - x = self.w_1(x) - x = self.act(x) - x = self.drop(x) - x = self.w_2(x) - return x - - -class MultiHeadRelativeAttention(nn.Module): - def __init__( - self, - n_head: int = 8, - d_model: int = 512, - dropout: float = 0.1, - bidirectional: bool = True, - has_relative_attention_bias: bool = True, - attention_num_buckets: int = 32, - attention_max_distance: int = 128, - ): - super().__init__() - d_head = d_model // n_head - self.n_head = n_head - self.d_head = d_head - self.bidirectional = bidirectional - self.has_relative_attention_bias = has_relative_attention_bias - self.attention_num_buckets = attention_num_buckets - self.attention_max_distance = attention_max_distance - - # Create linear query, key, value projections - self.w_qs = lora.Linear(d_model, d_model, bias=False, r=LORA_R) - self.w_ks = nn.Linear(d_model, d_model, bias=False) - self.w_vs = lora.Linear(d_model, d_model, bias=False, r=LORA_R) - - # Create linear final output projection - self.fc = lora.Linear(d_model, d_model, bias=False, r=LORA_R) - - # Dropout for attention output weights - self.dropout = nn.Dropout(dropout) - - # Create relative positional embeddings (if turned on) - if has_relative_attention_bias: - self.relative_attention_bias = nn.Embedding(attention_num_buckets, n_head) - - def _relative_position_bucket(self, relative_position): - """Converts unbounded relative position into bounded set of buckets - with half "exact" buckets (1 position = 1 bucket) and half "log-spaced" - buckets - Parameters - ---------- - relative_position : Tensor[T_q x T_kv] - Relative positions between queries and key_value items - Returns - ------- - Tensor[T_q x T_kv] - Input relative positions converted into buckets - """ - relative_buckets = 0 - num_buckets = self.attention_num_buckets - max_distance = self.attention_max_distance - - # Convert relative position for (-inf, inf) to [0, inf] - # Negative relative positions correspond to past - # Positive relative positions correspond to future - if self.bidirectional: - # use half buckets for each side (past / future) - num_buckets //= 2 - - # Shift the position positions by `num_buckets` to wrap around - # negative positions - relative_buckets += (relative_position > 0).to(torch.long) * num_buckets - relative_position = torch.abs(relative_position) - else: - # If not bidirectional, ignore positive positions and wrap - # negative positions to positive - relative_position = -torch.min( - relative_position, torch.zeros_like(relative_position) - ) - - # Allocate half of the buckets are for exact increments in positions - max_exact = num_buckets // 2 - is_small = relative_position < max_exact - - # The other half of the buckets are for logarithmically bigger bins in - # positions up to `max_distance` - relative_postion_if_large = max_exact + ( - torch.log(relative_position.float() / max_exact) - / math.log(max_distance / max_exact) - * (num_buckets - max_exact) - ).to(torch.long) - - # Clip the max relative position to `num_buckets - 1` - relative_postion_if_large = torch.min( - relative_postion_if_large, - torch.full_like(relative_postion_if_large, num_buckets - 1), - ) - - # Choose relative buckets based on small or large positions - relative_buckets += torch.where( - is_small, relative_position, relative_postion_if_large - ) - - return relative_buckets - - def compute_bias(self, query_length, key_length): - """Computes a position bias scalar for each index in query_length x key_length - Parameters - ---------- - query_length : int - key_length : int - Returns - ------- - Tensor[heads x 1 x T_q x T_kv] - Position bias to be applied on attention logits - """ - - query_position = torch.arange(query_length, dtype=torch.long)[:, None] - key_position = torch.arange(key_length, dtype=torch.long)[None, :] - relative_position = key_position - query_position - - # Convert relative position to buckets - relative_position_bucket = self._relative_position_bucket(relative_position) - relative_position_bucket = relative_position_bucket.to( - self.relative_attention_bias.weight.device - ) - - # Index attention bias values - values = self.relative_attention_bias(relative_position_bucket) - values = rearrange(values, "q k h -> h 1 q k") - - return values - - def forward(self, q, k, v, mask=None, position_bias=None): - """Computes attention over (keys, values) for every timestep in query - Parameters - ---------- - q : Tensor[B x T_q x d_model] - Query vectors - k : Tensor[B x T_kv x d_model] - Key vectors to compute attention over - v : Tensor[B x T_kv x d_model] - Value vectors corresponding to the keys - mask : Tensor[B x T_q x T_kv], optional - position_bias: Tensor[head x 1 x T_q x T_kv] - Returns - ------- - Tensor[B x T_q x d_model] - Outputs after attending (key, value) using queries - """ - # Compute query, key, value projections - q = rearrange(self.w_qs(q), "b l (head k) -> head b l k", head=self.n_head) - k = rearrange(self.w_ks(k), "b t (head k) -> head b t k", head=self.n_head) - v = rearrange(self.w_vs(v), "b t (head k) -> head b t k", head=self.n_head) - - # Compute attention matrix - attn = torch.einsum("hblk,hbtk->hblt", [q, k]) / np.sqrt(q.shape[-1]) - - # Add relative position bias to attention scores - if position_bias is None: - if self.has_relative_attention_bias: - position_bias = self.compute_bias(q.size(-2), k.size(-2)) - else: - position_bias = torch.zeros_like(attn) - attn += position_bias - - # Apply mask to attention scores to prevent looking up invalid locations - if mask is not None: - attn = attn.masked_fill(mask[None] == 0, -1e9) - - # Normalize attention scores and add dropout - attn = torch.softmax(attn, dim=3) - attn = self.dropout(attn) - - # Compute attended outputs (product of attention matrix and values) - output = torch.einsum("hblt,hbtv->hblv", [attn, v]) - output = rearrange(output, "head b l v -> b l (head v)") - output = self.fc(output) - - return output, position_bias - - -class TransformerLayer(nn.Module): - def __init__( - self, - d_model: int = 512, - d_cond: int = 64, - n_heads: int = 8, - bidirectional: bool = True, - is_decoder: bool = False, - has_relative_attention_bias: bool = False, - flash_attn: bool = False, - dropout: float = 0.1, - ): - super().__init__() - # Store args - self.is_decoder = is_decoder - - # Create self-attention layer - self.norm_1 = RMSNorm(d_model) - self.film_1 = FiLM(d_cond, d_model) - self.flash_attn = flash_attn - - if flash_attn: - from flash_attn.flash_attention import FlashMHA - self.self_attn = FlashMHA( - embed_dim=d_model, - num_heads=n_heads, - attention_dropout=dropout, - causal=False, - ) - else: - self.self_attn = MultiHeadRelativeAttention( - n_heads, d_model, dropout, bidirectional, has_relative_attention_bias - ) - - # (Optional) Create cross-attention layer - if is_decoder: - self.norm_2 = RMSNorm(d_model) - self.film_2 = FiLM(d_cond, d_model) - self.cross_attn = MultiHeadRelativeAttention( - n_heads, - d_model, - dropout, - bidirectional=True, - has_relative_attention_bias=False, - ) - - # Create last feed-forward layer - self.norm_3 = RMSNorm(d_model) - self.film_3 = FiLM(d_cond, d_model) - self.feed_forward = FeedForward(d_model=d_model, dropout=dropout) - - # Create dropout - self.dropout = nn.Dropout(dropout) - - def forward( - self, - x, - x_mask, - cond, - src=None, - src_mask=None, - position_bias=None, - encoder_decoder_position_bias=None, - ): - """Computes one transformer layer consisting of self attention, (op) cross attention - and feedforward layer - Parameters - ---------- - x : Tensor[B x T_q x D] - x_mask : Tensor[B x T_q] - src : Tensor[B x T_kv x D], optional - src_mask : Tensor[B x T_kv x D], optional - position_bias : Tensor[heads x B x T_q x T_q], optional - Relative position bias for self attention layer - encoder_decoder_position_bias : Tensor[heads x B x T_q x T_kv], optional - Relative position bias for cross attention layer - Returns - ------- - Tensor[B x T_q x D] - """ - y = self.norm_1(x) - y = self.film_1(y.permute(0, 2, 1), cond).permute(0, 2, 1) - if self.flash_attn: - with torch.autocast(y.device.type, dtype=torch.bfloat16): - y = self.self_attn(y)[0] - else: - y, position_bias = self.self_attn(y, y, y, x_mask, position_bias) - x = x + self.dropout(y) - - if self.is_decoder: - y = self.norm_2(x) - y = self.film_2(y.permute(0, 2, 1), cond).permute(0, 2, 1) - y, encoder_decoder_position_bias = self.cross_attn( - y, src, src, src_mask, encoder_decoder_position_bias - ) - x = x + self.dropout(y) - - y = self.norm_3(x) - y = self.film_3( - y.permute( - 0, - 2, - 1, - ), - cond, - ).permute(0, 2, 1) - y = self.feed_forward(y) - x = x + self.dropout(y) - - return x, position_bias, encoder_decoder_position_bias - - -class TransformerStack(nn.Module): - def __init__( - self, - d_model: int = 512, - d_cond: int = 64, - n_heads: int = 8, - n_layers: int = 8, - last_layer: bool = True, - bidirectional: bool = True, - flash_attn: bool = False, - is_decoder: bool = False, - dropout: float = 0.1, - ): - super().__init__() - # Store args - self.bidirectional = bidirectional - self.is_decoder = is_decoder - - # Create transformer layers - # In T5, relative attention bias is shared by all layers in the stack - self.layers = nn.ModuleList( - [ - TransformerLayer( - d_model, - d_cond, - n_heads, - bidirectional, - is_decoder, - has_relative_attention_bias=True if (i == 0) else False, - flash_attn=flash_attn, - dropout=dropout, - ) - for i in range(n_layers) - ] - ) - - # Perform last normalization - self.norm = RMSNorm(d_model) if last_layer else None - - def subsequent_mask(self, size): - return torch.ones(1, size, size).tril().bool() - - def forward(self, x, x_mask, cond=None, src=None, src_mask=None, - return_activations: bool = False - ): - """Computes a full transformer stack - Parameters - ---------- - x : Tensor[B x T_q x D] - x_mask : Tensor[B x T_q] - src : Tensor[B x T_kv x D], optional - src_mask : Tensor[B x T_kv], optional - Returns - ------- - Tensor[B x T_q x D] - """ - - # Convert `src_mask` to (B x T_q x T_kv) shape for cross attention masking - if self.is_decoder: - src_mask = x_mask.unsqueeze(-1) * src_mask.unsqueeze(-2) - - # Convert `x_mask` to (B x T_q x T_q) shape for self attention masking - x_mask = x_mask.unsqueeze(-2) - if not self.bidirectional: - x_mask = x_mask * self.subsequent_mask(x.size(1)).to(x_mask.device) - - # Initialize position biases - position_bias = None - encoder_decoder_position_bias = None - - # Compute transformer layers - if return_activations: - activations = [] - for layer in self.layers: - x, position_bias, encoder_decoder_position_bias = layer( - x=x, - x_mask=x_mask, - cond=cond, - src=src, - src_mask=src_mask, - position_bias=position_bias, - encoder_decoder_position_bias=encoder_decoder_position_bias, - ) - if return_activations: - activations.append(x.detach()) - - - out = self.norm(x) if self.norm is not None else x - if return_activations: - return out, torch.stack(activations) - else: - return out - - -class VampNet(at.ml.BaseModel): - def __init__( - self, - n_heads: int = 20, - n_layers: int = 16, - r_cond_dim: int = 0, - n_codebooks: int = 9, - n_conditioning_codebooks: int = 0, - latent_dim: int = 8, - embedding_dim: int = 1280, - vocab_size: int = 1024, - flash_attn: bool = True, - noise_mode: str = "mask", - dropout: float = 0.1 - ): - super().__init__() - assert r_cond_dim == 0, f"r_cond_dim must be 0 (not supported), but got {r_cond_dim}" - self.n_heads = n_heads - self.n_layers = n_layers - self.r_cond_dim = r_cond_dim - self.n_codebooks = n_codebooks - self.n_conditioning_codebooks = n_conditioning_codebooks - self.embedding_dim = embedding_dim - self.vocab_size = vocab_size - self.latent_dim = latent_dim - self.flash_attn = flash_attn - self.noise_mode = noise_mode - - assert self.noise_mode == "mask", "deprecated" - - self.embedding = CodebookEmbedding( - latent_dim=latent_dim, - n_codebooks=n_codebooks, - vocab_size=vocab_size, - emb_dim=embedding_dim, - special_tokens=["MASK"], - ) - self.mask_token = self.embedding.special_idxs["MASK"] - - self.transformer = TransformerStack( - d_model=embedding_dim, - d_cond=r_cond_dim, - n_heads=n_heads, - n_layers=n_layers, - last_layer=True, - bidirectional=True, - flash_attn=flash_attn, - is_decoder=False, - dropout=dropout, - ) - - # Add final conv layer - self.n_predict_codebooks = n_codebooks - n_conditioning_codebooks - self.classifier = SequentialWithFiLM( - WNConv1d( - embedding_dim, - vocab_size * self.n_predict_codebooks, - kernel_size=1, - padding="same", - # groups=self.n_predict_codebooks, - ), - ) - - def forward(self, x, return_activations: bool = False): - x = self.embedding(x) - x_mask = torch.ones_like(x, dtype=torch.bool)[:, :1, :].squeeze(1) - - x = rearrange(x, "b d n -> b n d") - out = self.transformer(x=x, x_mask=x_mask, return_activations=return_activations) - if return_activations: - out, activations = out - - out = rearrange(out, "b n d -> b d n") - - out = self.classifier(out, None) # no cond here! - - out = rearrange(out, "b (p c) t -> b p (t c)", c=self.n_predict_codebooks) - - if return_activations: - return out, activations - else: - return out - - def r_embed(self, r, max_positions=10000): - if self.r_cond_dim > 0: - dtype = r.dtype - - r = _gamma(r) * max_positions - half_dim = self.r_cond_dim // 2 - - emb = math.log(max_positions) / (half_dim - 1) - emb = torch.arange(half_dim, device=r.device).float().mul(-emb).exp() - - emb = r[:, None] * emb[None, :] - emb = torch.cat([emb.sin(), emb.cos()], dim=1) - - if self.r_cond_dim % 2 == 1: # zero pad - emb = nn.functional.pad(emb, (0, 1), mode="constant") - - return emb.to(dtype) - else: - return r - - @torch.no_grad() - def to_signal(self, z, codec): - """ - convert a sequence of latents to a signal. - """ - assert z.ndim == 3 - - signal = at.AudioSignal( - codec.decode( - codec.quantizer.from_latents(self.embedding.from_codes(z, codec))[0] - )["audio"], - codec.sample_rate, - ) - - # find where the mask token is and replace it with silence in the audio - for tstep in range(z.shape[-1]): - if torch.any(z[:, :, tstep] == self.mask_token): - sample_idx_0 = tstep * codec.hop_length - sample_idx_1 = sample_idx_0 + codec.hop_length - signal.samples[:, :, sample_idx_0:sample_idx_1] = 0.0 - - return signal - - - @torch.no_grad() - def generate( - self, - codec, - time_steps: int = 300, - sampling_steps: int = 36, - start_tokens: Optional[torch.Tensor] = None, - sampling_temperature: float = 1.0, - mask: Optional[torch.Tensor] = None, - mask_temperature: float = 10.5, - typical_filtering=False, - typical_mass=0.2, - typical_min_tokens=1, - top_p=None, - return_signal=True, - seed: int = None, - sample_cutoff: float = 1.0, - ): - if seed is not None: - at.util.seed(seed) - logging.debug(f"beginning generation with {sampling_steps} steps") - - - - ##################### - # resolve initial z # - ##################### - z = start_tokens - - if z is None: - z = torch.full((1, self.n_codebooks, time_steps), self.mask_token).to( - self.device - ) - - logging.debug(f"created z with shape {z.shape}") - - - ################# - # resolve mask # - ################# - - if mask is None: - mask = torch.ones_like(z).to(self.device).int() - mask[:, : self.n_conditioning_codebooks, :] = 0.0 - if mask.ndim == 2: - mask = mask[:, None, :].repeat(1, z.shape[1], 1) - # init_mask = mask.clone() - - logging.debug(f"created mask with shape {mask.shape}") - - - ########### - # set up # - ########## - # apply the mask to z - z_masked = z.masked_fill(mask.bool(), self.mask_token) - # logging.debug(f"z_masked: {z_masked}") - - # how many mask tokens to begin with? - num_mask_tokens_at_start = (z_masked == self.mask_token).sum() - logging.debug(f"num mask tokens at start: {num_mask_tokens_at_start}") - - # how many codebooks are we inferring vs conditioning on? - n_infer_codebooks = self.n_codebooks - self.n_conditioning_codebooks - logging.debug(f"n infer codebooks: {n_infer_codebooks}") - - ################# - # begin sampling # - ################# - - for i in range(sampling_steps): - logging.debug(f"step {i} of {sampling_steps}") - - # our current schedule step - r = scalar_to_batch_tensor( - (i + 1) / sampling_steps, - z.shape[0] - ).to(z.device) - logging.debug(f"r: {r}") - - # get latents - latents = self.embedding.from_codes(z_masked, codec) - logging.debug(f"computed latents with shape: {latents.shape}") - - - # infer from latents - # NOTE: this collapses the codebook dimension into the sequence dimension - logits = self.forward(latents) # b, prob, seq - logits = logits.permute(0, 2, 1) # b, seq, prob - b = logits.shape[0] - - logging.debug(f"permuted logits with shape: {logits.shape}") - - sampled_z, selected_probs = sample_from_logits( - logits, sample=( - (i / sampling_steps) <= sample_cutoff - ), - temperature=sampling_temperature, - typical_filtering=typical_filtering, typical_mass=typical_mass, - typical_min_tokens=typical_min_tokens, - top_k=None, top_p=top_p, return_probs=True, - ) - - logging.debug(f"sampled z with shape: {sampled_z.shape}") - - # flatten z_masked and mask, so we can deal with the sampling logic - # we'll unflatten them at the end of the loop for the next forward pass - # remove conditioning codebooks, we'll add them back at the end - z_masked = codebook_flatten(z_masked[:, self.n_conditioning_codebooks:, :]) - - mask = (z_masked == self.mask_token).int() - - # update the mask, remove conditioning codebooks from the mask - logging.debug(f"updated mask with shape: {mask.shape}") - # add z back into sampled z where the mask was false - sampled_z = torch.where( - mask.bool(), sampled_z, z_masked - ) - logging.debug(f"added z back into sampled z with shape: {sampled_z.shape}") - - # ignore any tokens that weren't masked - selected_probs = torch.where( - mask.bool(), selected_probs, torch.inf - ) - - # get the num tokens to mask, according to the schedule - num_to_mask = torch.floor(_gamma(r) * num_mask_tokens_at_start).unsqueeze(1).long() - logging.debug(f"num to mask: {num_to_mask}") - - if i != (sampling_steps - 1): - num_to_mask = torch.maximum( - torch.tensor(1), - torch.minimum( - mask.sum(dim=-1, keepdim=True) - 1, - num_to_mask - ) - ) - - - # get our new mask - mask = mask_by_random_topk( - num_to_mask, selected_probs, mask_temperature * (1-r) - ) - - # update the mask - z_masked = torch.where( - mask.bool(), self.mask_token, sampled_z - ) - logging.debug(f"updated z_masked with shape: {z_masked.shape}") - - z_masked = codebook_unflatten(z_masked, n_infer_codebooks) - mask = codebook_unflatten(mask, n_infer_codebooks) - logging.debug(f"unflattened z_masked with shape: {z_masked.shape}") - - # add conditioning codebooks back to z_masked - z_masked = torch.cat( - (z[:, :self.n_conditioning_codebooks, :], z_masked), dim=1 - ) - logging.debug(f"added conditioning codebooks back to z_masked with shape: {z_masked.shape}") - - - # add conditioning codebooks back to sampled_z - sampled_z = codebook_unflatten(sampled_z, n_infer_codebooks) - sampled_z = torch.cat( - (z[:, :self.n_conditioning_codebooks, :], sampled_z), dim=1 - ) - - logging.debug(f"finished sampling") - - if return_signal: - return self.to_signal(sampled_z, codec) - else: - return sampled_z - -def sample_from_logits( - logits, - sample: bool = True, - temperature: float = 1.0, - top_k: int = None, - top_p: float = None, - typical_filtering: bool = False, - typical_mass: float = 0.2, - typical_min_tokens: int = 1, - return_probs: bool = False - ): - """Convenience function to sample from a categorial distribution with input as - unnormalized logits. - - Parameters - ---------- - logits : Tensor[..., vocab_size] - config: SamplingConfig - The set of hyperparameters to be used for sampling - sample : bool, optional - Whether to perform multinomial sampling, by default True - temperature : float, optional - Scaling parameter when multinomial samping, by default 1.0 - top_k : int, optional - Restricts sampling to only `top_k` values acc. to probability, - by default None - top_p : float, optional - Restricts sampling to only those values with cumulative - probability = `top_p`, by default None - - Returns - ------- - Tensor[...] - Sampled tokens - """ - shp = logits.shape[:-1] - - if typical_filtering: - typical_filter(logits, - typical_mass=typical_mass, - typical_min_tokens=typical_min_tokens - ) - - # Apply top_k sampling - if top_k is not None: - v, _ = logits.topk(top_k) - logits[logits < v[..., [-1]]] = -float("inf") - - # Apply top_p (nucleus) sampling - if top_p is not None and top_p < 1.0: - v, sorted_indices = logits.sort(descending=True) - cumulative_probs = v.softmax(dim=-1).cumsum(dim=-1) - - sorted_indices_to_remove = cumulative_probs > top_p - # Right shift indices_to_remove to keep 1st token over threshold - sorted_indices_to_remove = F.pad(sorted_indices_to_remove, (1, 0), value=False)[ - ..., :-1 - ] - - # Compute indices_to_remove in unsorted array - indices_to_remove = sorted_indices_to_remove.scatter( - -1, sorted_indices, sorted_indices_to_remove - ) - - logits[indices_to_remove] = -float("inf") - - # Perform multinomial sampling after normalizing logits - probs = ( - F.softmax(logits / temperature, dim=-1) - if temperature > 0 - else logits.softmax(dim=-1) - ) - token = ( - probs.view(-1, probs.size(-1)).multinomial(1).squeeze(1).view(*shp) - if sample - else logits.argmax(-1) - ) - - if return_probs: - token_probs = probs.take_along_dim(token.unsqueeze(-1), dim=-1).squeeze(-1) - return token, token_probs - else: - return token - - - -def mask_by_random_topk( - num_to_mask: int, - probs: torch.Tensor, - temperature: float = 1.0, - ): - """ - Args: - num_to_mask (int): number of tokens to mask - probs (torch.Tensor): probabilities for each sampled event, shape (batch, seq) - temperature (float, optional): temperature. Defaults to 1.0. - """ - logging.debug(f"masking by random topk") - logging.debug(f"num to mask: {num_to_mask}") - logging.debug(f"probs shape: {probs.shape}") - logging.debug(f"temperature: {temperature}") - logging.debug("") - - noise = gumbel_noise_like(probs) - confidence = torch.log(probs) + temperature * noise - logging.debug(f"confidence shape: {confidence.shape}") - - sorted_confidence, sorted_idx = confidence.sort(dim=-1) - logging.debug(f"sorted confidence shape: {sorted_confidence.shape}") - logging.debug(f"sorted idx shape: {sorted_idx.shape}") - - # get the cut off threshold, given the mask length - cut_off = torch.take_along_dim( - sorted_confidence, num_to_mask, axis=-1 - ) - logging.debug(f"cut off shape: {cut_off.shape}") - - # mask out the tokens - mask = confidence < cut_off - logging.debug(f"mask shape: {mask.shape}") - - return mask - -def typical_filter( - logits, - typical_mass: float = 0.95, - typical_min_tokens: int = 1,): - nb, nt, _ = logits.shape - x_flat = rearrange(logits, "b t l -> (b t ) l") - x_flat_norm = torch.nn.functional.log_softmax(x_flat, dim=-1) - x_flat_norm_p = torch.exp(x_flat_norm) - entropy = -(x_flat_norm * x_flat_norm_p).nansum(-1, keepdim=True) - - c_flat_shifted = torch.abs((-x_flat_norm) - entropy) - c_flat_sorted, x_flat_indices = torch.sort(c_flat_shifted, descending=False) - x_flat_cumsum = ( - x_flat.gather(-1, x_flat_indices).softmax(dim=-1).cumsum(dim=-1) - ) - - last_ind = (x_flat_cumsum < typical_mass).sum(dim=-1) - sorted_indices_to_remove = c_flat_sorted > c_flat_sorted.gather( - 1, last_ind.view(-1, 1) - ) - if typical_min_tokens > 1: - sorted_indices_to_remove[..., :typical_min_tokens] = 0 - indices_to_remove = sorted_indices_to_remove.scatter( - 1, x_flat_indices, sorted_indices_to_remove - ) - x_flat = x_flat.masked_fill(indices_to_remove, -float("Inf")) - logits = rearrange(x_flat, "(b t) l -> b t l", t=nt) - return logits - - -if __name__ == "__main__": - # import argbind - from .layers import num_params - - VampNet = argbind.bind(VampNet) - - @argbind.bind(without_prefix=True) - def try_model(device: str = "cuda", batch_size: int = 2, seq_len_s: float = 10.0): - seq_len = int(32000 / 512 * seq_len_s) - - model = VampNet().to(device) - - z = torch.randint( - 0, model.vocab_size, size=(batch_size, model.n_codebooks, seq_len) - ).to(device) - - r = torch.zeros(batch_size).to(device) - - z_mask_latent = torch.rand( - batch_size, model.latent_dim * model.n_codebooks, seq_len - ).to(device) - z_hat = model(z_mask_latent) - - pred = z_hat.argmax(dim=1) - pred = model.embedding.unflatten(pred, n_codebooks=model.n_predict_codebooks) - - print(f"model has {num_params(model)/1e6:<.3f}M parameters") - print(f"prediction has shape {pred.shape}") - breakpoint() - - args = argbind.parse_args() - with argbind.scope(args): - try_model() - - diff --git a/spaces/diacanFperku/AutoGPT/Crack Call Of Duty Black Ops Zombie Mode Enabled SKIDROW REPACK.md b/spaces/diacanFperku/AutoGPT/Crack Call Of Duty Black Ops Zombie Mode Enabled SKIDROW REPACK.md deleted file mode 100644 index 935ccfa93eb951b05d430ed10b4809a4fc9b2dd9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Crack Call Of Duty Black Ops Zombie Mode Enabled SKIDROW REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Crack Call of Duty Black Ops Zombie Mode Enabled SKIDROW


      Download Filehttps://gohhs.com/2uFTOM



      -
      - 8a78ff9644
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Minecraft-1.5.2-Hexxit-Mod-Pack(cracked) CPY !NEW!.md b/spaces/diacanFperku/AutoGPT/Minecraft-1.5.2-Hexxit-Mod-Pack(cracked) CPY !NEW!.md deleted file mode 100644 index 68bb90546d796d6ac75d2343328d39bc6bd18641..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Minecraft-1.5.2-Hexxit-Mod-Pack(cracked) CPY !NEW!.md +++ /dev/null @@ -1,10 +0,0 @@ -

      Minecraft-1.5.2-Hexxit-Mod-Pack(cracked) CPY


      DOWNLOAD 🌟 https://gohhs.com/2uFTkD



      -
      -Minecraft Hexxit ModPack Download (Cracked) 14, 2020 - Third step: copy the updated Hexxit modpack to the .minecraft/mods folder (if it doesn't exist, install Forge again or create it ... On this page you can download the latest Minecraft mod. -Hexxit ModPack is a mod pack that adds new blocks, mobs, mechanisms and... -Here you can download Minecraft Hexxit ModPack [v1.11.2] for free via a direct link from the official website! -You probably already know that Hexxit is doing everything possible to make... -Minecraft Hexxit ModPack is a mod pack that adds new blocks, mobs, gears and more to the Minecraft world. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Resident Evil 4 Ultimate HD Edition Repack RG Mechanics Repack [WORK].md b/spaces/diacanFperku/AutoGPT/Resident Evil 4 Ultimate HD Edition Repack RG Mechanics Repack [WORK].md deleted file mode 100644 index 86b579877ee0fc1a5fd2468ed61987bb0f6e837a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Resident Evil 4 Ultimate HD Edition Repack RG Mechanics Repack [WORK].md +++ /dev/null @@ -1,6 +0,0 @@ - -

      the resident evil 4 is the best resident evil game ever made. the game was developed by shinji mikami, he has just finished the resident evil 3: nemesis. the game was released on the 23rd of september, 2005 in japan and in the usa on the 27th of september, 2005 for the playstation 2. the game was released in germany on the 7th of october, 2005 and in the uk on the 7th of october, 2005. it is a fourth installment in the main series of the resident evil series. it is also the first resident evil game to not be developed by capcom, instead being developed by capcom japan and capcom usa, and was developed to be a transition to the upcoming fifth main series title, resident evil 5. the game is one of the few games that received a remake of the game in the form of resident evil 4: deluxe edition, a re-release of the game on the playstation 3. it features an updated version of the original game that includes both the original and the original xbox version of the game, along with the wii version of the game, a cgi making of documentary and all seven of the main game's cutscenes, the entire resident evil 4 soundtrack and bonus playable characters ada wong and chris redfield, which were previously exclusive to the xbox 360 version of the game. [8]

      -

      Resident Evil 4 Ultimate HD Edition Repack RG Mechanics Repack


      Download File »»» https://gohhs.com/2uFTeU



      -

      before resident evil 4, the closest thing to a horror game was dead space. the game is based on the story of an astronaut named isaac clarke, who is exposed to an experimental drug known as the c-virus. isaac is infected and must fight for survival in a sealed off research facility that is infested by monsters. the main gameplay of the resident evil series has always revolved around the survival horror game, however, resident evil 4 was made to be a more traditional action game. however, despite this, it is still classified as a survival horror game. [9] the game features many more in-game weapons and ammunition types, as well as some extra item recipes. aside from this, the gameplay remains the same as other resident evil games, with the player being able to move around the environment and shoot various enemies. unlike other resident evil games, enemies in this title are not consistently fast, nor are they more powerful than the player. it is the series' first game to feature non-linear gameplay, with most of the game's missions taking place in a series of different environments, although some missions involve controlling a character from a third-person perspective. [10] the game also features two characters with different styles of play. albert wesker, the main character of the game, is a strong, silent type of character. he is a former special forces operative, and still a kind of a deadly assassin despite his age. chris redfield, the other character, is a more aggressive style of character who can use everything from a revolver to a grenade launcher, and is a former member of the u.s. army's delta force.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diaoren/OpenSetObstacleDetection/opendet2/__init__.py b/spaces/diaoren/OpenSetObstacleDetection/opendet2/__init__.py deleted file mode 100644 index f63ee6e5c2ab6ff93dcb9085935344985b494e6a..0000000000000000000000000000000000000000 --- a/spaces/diaoren/OpenSetObstacleDetection/opendet2/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .config import * -from .data import * -from .engine import * -from .evaluation import * -from .modeling import * - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_r31_12e_ST_MJ_SA.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_r31_12e_ST_MJ_SA.py deleted file mode 100644 index 88f4dbeae79584720134969a9ff1179e0352471d..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/master/master_r31_12e_ST_MJ_SA.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/master.py', - '../../_base_/schedules/schedule_adam_step_12e.py', - '../../_base_/recog_pipelines/master_pipeline.py', - '../../_base_/recog_datasets/ST_SA_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=512, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=128), - test_dataloader=dict(samples_per_gpu=128), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/dpe1/beat_manipulator/beat_manipulator/main.py b/spaces/dpe1/beat_manipulator/beat_manipulator/main.py deleted file mode 100644 index 8009c60822bc8e2c4a6067a023803c8d0d7d8e32..0000000000000000000000000000000000000000 --- a/spaces/dpe1/beat_manipulator/beat_manipulator/main.py +++ /dev/null @@ -1,535 +0,0 @@ -import numpy as np, scipy.interpolate -from . import io, utils -from .effects import BM_EFFECTS -from .metrics import BM_METRICS -from .presets import BM_SAMPLES - - -class song: - def __init__(self, audio = None, sr:int=None, log=True): - if audio is None: - from tkinter import filedialog - audio = filedialog.askopenfilename() - - if isinstance(audio, song): self.path = audio.path - self.audio, self.sr = io._load(audio=audio, sr=sr) - - # unique filename is needed to generate/compare filenames for cached beatmaps - if isinstance(audio, str): - self.path = audio - elif not isinstance(audio, song): - self.path = f'unknown_{hex(int(np.sum(self.audio) * 10**18))}' - - self.log = log - self.beatmap = None - self.normalized = None - - def _slice(self, a): - if a is None: return None - elif isinstance(a, float): - if (a_dec := a % 1) == 0: return self.beatmap[int(a)] - a_int = int(int(a)//1) - start = self.beatmap[a_int] - return int(start + a_dec * (self.beatmap[a_int+1] - start)) - elif isinstance(a, int): return self.beatmap[a] - else: raise TypeError(f'slice indices must be int, float, or None, not {type(a)}. Indice is {a}') - - def __getitem__(self, s): - if isinstance(s, slice): - start = s.start - stop = s.stop - step = s.step - if start is not None and stop is not None: - if start > stop: - is_reversed = -1 - start, stop = stop, start - else: is_reversed = None - if step is None or step == 1: - start = self._slice(start) - stop = self._slice(stop) - if isinstance(self.audio, list): return [self.audio[0][start:stop:is_reversed],self.audio[1][start:stop:is_reversed]] - else: return self.audio[:,start:stop:is_reversed] - else: - i = s.start if s.start is not None else 0 - end = s.stop if s.stop is not None else len(self.beatmap) - if i > end: - step = -step - if step > 0: i, end = end-2, i - elif step < 0: i, end = end-2, i - if step < 0: - is_reversed = True - end -= 1 - else: is_reversed = False - pattern = '' - while ((i > end) if is_reversed else (i < end)): - pattern+=f'{i},' - i+=step - song_copy = song(audio = self.audio, sr = self.sr, log = False) - song_copy.beatmap = self.beatmap.copy() - song_copy.beatmap = np.insert(song_copy.beatmap, 0, 0) - result = song_copy.beatswap(pattern = pattern, return_audio = True) - return result if isinstance(self.audio, np.ndarray) else result.tolist() - - - elif isinstance(s, float): - start = self._slice(s-1) - stop = self._slice(s) - if isinstance(self.audio, list): return [self.audio[0][start:stop],self.audio[1][start:stop]] - else: return self.audio[:,start:stop] - elif isinstance(s, int): - start = self.beatmap[s-1] - stop = self.beatmap[s] - if isinstance(self.audio, list): return [self.audio[0][start:stop],self.audio[1][start:stop]] - else: return self.audio[:,start:stop] - elif isinstance(s, tuple): - start = self._slice(s[0]) - stop = self._slice(s[0] + s[1]) - if stop<0: - start -= stop - stop = -stop - step = -1 - else: step = None - if isinstance(self.audio, list): return [self.audio[0][start:stop:step],self.audio[1][start:stop:step]] - else: return self.audio[:,start:stop:step] - elif isinstance(s, list): - start = s[0] - stop = s[1] if len(s) > 1 else None - if start > stop: - step = -1 - start, stop = stop, start - else: step = None - start = self._slice(start) - stop = self._slice(stop) - if step is not None and stop is None: stop = self._slice(start + s.step) - if isinstance(self.audio, list): return [self.audio[0][start:stop:step],self.audio[1][start:stop:step]] - else: return self.audio[:,start:stop:step] - elif isinstance(s, str): - return self.beatswap(pattern = s, return_audio = True) - - - else: raise TypeError(f'list indices must be int/float/slice/tuple, not {type(s)}; perhaps you missed a comma? Slice is `{s}`') - - - def _print(self, *args, end=None, sep=None): - if self.log: print(*args, end=end, sep=sep) - - - def write(self, output='', ext='mp3', suffix=' (beatswap)', literal_output=False): - """writes""" - if literal_output is False: output = io._outputfilename(output, filename=self.path, suffix=suffix, ext=ext) - io.write_audio(audio=self.audio, sr=self.sr, output=output, log=self.log) - return output - - - def beatmap_generate(self, lib='madmom.BeatDetectionProcessor', caching = True, load_settings = True): - """Find beat positions""" - from . import beatmap - self.beatmap = beatmap.generate(audio = self.audio, sr = self.sr, lib=lib, caching=caching, filename = self.path, log = self.log, load_settings = load_settings) - if load_settings is True: - audio_id=hex(len(self.audio[0])) - settingsDir="beat_manipulator/beatmaps/" + ''.join(self.path.split('/')[-1]) + "_"+lib+"_"+audio_id+'_settings.txt' - import os - if os.path.exists(settingsDir): - with open(settingsDir, 'r') as f: - settings = f.read().split(',') - if settings[3] != None: self.normalized = settings[3] - self.beatmap_default = self.beatmap.copy() - self.lib = lib - - def beatmap_scale(self, scale:float): - from . import beatmap - self.beatmap = beatmap.scale(beatmap = self.beatmap, scale = scale, log = self.log) - - def beatmap_shift(self, shift:float, mode = 1): - from . import beatmap - self.beatmap = beatmap.shift(beatmap = self.beatmap, shift = shift, log = self.log, mode = mode) - - def beatmap_reset(self): - self.beatmap = self.beatmap_default.copy() - - def beatmap_adjust(self, adjust = 500): - self.beatmap = np.append(np.sort(np.absolute(self.beatmap - adjust)), len(self.audio[0])) - - def beatmap_save_settings(self, scale: float = None, shift: float = None, adjust: int = None, normalized = None, overwrite = 'ask'): - from . import beatmap - if self.beatmap is None: self.beatmap_generate() - beatmap.save_settings(audio = self.audio, filename = self.path, scale = scale, shift = shift,adjust = adjust, normalized = normalized, log=self.log, overwrite=overwrite, lib = self.lib) - - def beatswap(self, pattern = '1;"cowbell"s3v2, 2;"cowbell"s2, 3;"cowbell", 4;"cowbell"s0.5, 5;"cowbell"s0.25, 6;"cowbell"s0.4, 7;"cowbell"s0.8, 8;"cowbell"s1.6', - scale:float = 1, shift:float = 0, length = None, samples:dict = BM_SAMPLES, effects:dict = BM_EFFECTS, metrics:dict = BM_METRICS, smoothing: int = 100, adjust=500, return_audio = False, normalize = False, limit_beats=10000, limit_length = 52920000): - - if normalize is True: - self.normalize_beats() - if self.beatmap is None: self.beatmap_generate() - beatmap_default = self.beatmap.copy() - self.beatmap = np.append(np.sort(np.absolute(self.beatmap - adjust)), len(self.audio[0])) - self.beatmap_shift(shift) - self.beatmap_scale(scale) - - # baked in presets - #reverse - if pattern.lower() == 'reverse': - if return_audio is False: - self.audio = self[::-1] - self.beatmap = beatmap_default.copy() - return - else: - result = self[::-1] - self.beatmap = beatmap_default.copy() - return result - # shuffle - elif pattern.lower() == 'shuffle': - import random - beats = list(range(len(self.beatmap))) - random.shuffle(beats) - beats = ','.join(list(str(i) for i in beats)) - if return_audio is False: - self.beatswap(beats) - self.beatmap = beatmap_default.copy() - return - else: - result = self.beatswap(beats, return_audio = True) - self.beatmap = beatmap_default.copy() - return result - # test - elif pattern.lower() == 'test': - if return_audio is False: - self.beatswap('1;"cowbell"s3v2, 2;"cowbell"s2, 3;"cowbell", 4;"cowbell"s0.5, 5;"cowbell"s0.25, 6;"cowbell"s0.4, 7;"cowbell"s0.8, 8;"cowbell"s1.6') - self.beatmap = beatmap_default.copy() - return - else: - result = self.beatswap('1;"cowbell"s3v2, 2;"cowbell"s2, 3;"cowbell", 4;"cowbell"s0.5, 5;"cowbell"s0.25, 6;"cowbell"s0.4, 7;"cowbell"s0.8, 8;"cowbell"s1.6', return_audio = True) - self.beatmap = beatmap_default.copy() - return result - # random - elif pattern.lower() == 'random': - import random,math - pattern = '' - rand_length=0 - limit = 100 - while True: - limit -= 1 - rand_num = int(math.floor(random.triangular(1, 16, rand_length-1))) - if random.uniform(0, rand_num)>rand_length: rand_num = rand_length+1 - rand_slice = random.choices(['','>0.5','>0.25', '<0.5', '<0.25', '<1/3', '<2/3', '>1/3', '>2/3', '<0.75', '>0.75', - f'>{random.uniform(0.01,2)}', f'<{random.uniform(0.01,2)}'], weights = [13,1,1,1,1,1,1,1,1,1,1,1,1], k=1)[0] - - rand_effect = random.choices(['', 's0.5', 's2', f's{random.triangular(0.1,1,4)}', 'r','v0.5', 'v2', 'v0', - f'd{int(random.triangular(1,8,16))}', 'g', 'c', 'c0', 'c1', f'b{int(random.triangular(1,8,4))}'], - weights=[30, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 1], k=1)[0] - - rand_join = random.choices([', ', ';'], weights = [5, 1], k=1)[0] - pattern += f'{rand_num}{rand_slice}{rand_effect}{rand_join}' - if rand_join == ',': rand_length+=1 - if rand_length in [4, 8, 16]: - if random.uniform(rand_num,16)>14: break - else: - if random.uniform(rand_num,16)>15.5: break - if limit <= 0: break - pattern_length = 4 - if rand_length > 6: pattern_length = 8 - if rand_length > 12: pattern_length = 16 - if rand_length > 24: pattern_length = 32 - - - - - from . import parse - pattern, operators, pattern_length, shuffle_groups, shuffle_beats, c_slice, c_misc, c_join = parse.parse(pattern = pattern, samples = samples, pattern_length = length, log = self.log) - - #print(f'pattern length = {pattern_length}') - - # beatswap - n=-1 - tries = 0 - metric = None - result=[self.audio[:,:self.beatmap[0]]] - #for i in pattern: print(i) - - - stop = False - total_length = 0 - - # loop over pattern until it reaches the last beat - while n*pattern_length <= len(self.beatmap): - n+=1 - - if stop is True: break - - # Every time pattern loops, shuffles beats with # - if len(shuffle_beats) > 0: - pattern = parse._shuffle(pattern, shuffle_beats, shuffle_groups) - - # Loops over all beats in pattern - for num, b in enumerate(pattern): - - # check if beats limit has been reached - if limit_beats is not None and len(result) >= limit_beats: - stop = True - break - - if len(b) == 4: beat = b[3] # Sample has length 4 - else: beat = b[0] # Else take the beat - - if beat is not None: - beat_as_string = ''.join(beat) if isinstance(beat, list) else beat - # Skips `!` beats - if c_misc[9] in beat_as_string: continue - - # Audio is a sample or a song - if len(b) == 4: - audio = b[0] - - # Audio is a song - if b[2] == c_misc[10]: - try: - - # Song slice is a single beat, takes it - if isinstance(beat, str): - # random beat if `@` in beat (`_` is separator) - if c_misc[4] in beat: beat = parse._random(beat, rchar = c_misc[4], schar = c_misc[5], length = pattern_length) - beat = utils._safer_eval(beat) + pattern_length*n - while beat > len(audio.beatmap)-1: beat = 1 + beat - len(audio.beatmap) - beat = audio[beat] - - # Song slice is a range of beats, takes the beats - elif isinstance(beat, list): - beat = beat.copy() - for i in range(len(beat)-1): # no separator - if c_misc[4] in beat[i]: beat[i] = parse._random(beat[i], rchar = c_misc[4], schar = c_misc[5], length = pattern_length) - beat[i] = utils._safer_eval(beat[i]) - while beat[i] + pattern_length*n > len(audio.beatmap)-1: beat[i] = 1 + beat[i] - len(audio.beatmap) - if beat[2] == c_slice[0]: beat = audio[beat[0] + pattern_length*n : beat[1] + pattern_length*n] - elif beat[2] == c_slice[1]: beat = audio[beat[0] - 1 + pattern_length*n: beat[0] - 1 + beat[1] + pattern_length*n] - elif beat[2] == c_slice[2]: beat = audio[beat[0] - beat[1] + pattern_length*n : beat[0] + pattern_length*n] - - # No Song slice, take whole song - elif beat is None: beat = audio.audio - - except IndexError as e: - print(e) - tries += 1 - if tries > 30: break - continue - - # Audio is an audio file - else: - # No audio slice, takes whole audio - if beat is None: beat = audio - - # Audio slice, takes part of the audio - elif isinstance(beat, list): - audio_length = len(audio[0]) - beat = [min(int(utils._safer_eval(beat[0])*audio_length), audio_length-1), min(int(utils._safer_eval(beat[1])*audio_length), audio_length-1)] - if beat[0] > beat[1]: - beat[0], beat[1] = beat[1], beat[0] - step = -1 - else: step = None - beat = audio[:, beat[0] : beat[1] : step] - - # Audio is a beat - else: - try: - beat_str = beat if isinstance(beat, str) else ''.join(beat) - # Takes a single beat - if isinstance(beat, str): - if c_misc[4] in beat: beat = parse._random(beat, rchar = c_misc[4], schar = c_misc[5], length = pattern_length) - beat = self[utils._safer_eval(beat) + pattern_length*n] - - # Takes a range of beats - elif isinstance(beat, list): - beat = beat.copy() - for i in range(len(beat)-1): # no separator - if c_misc[4] in beat[i]: beat[i] = parse._random(beat[i], rchar = c_misc[4], schar = c_misc[5], length = pattern_length) - beat[i] = utils._safer_eval(beat[i]) - if beat[2] == c_slice[0]: beat = self[beat[0] + pattern_length*n : beat[1] + pattern_length*n] - elif beat[2] == c_slice[1]: beat = self[beat[0] - 1 + pattern_length*n: beat[0] - 1 + beat[1] + pattern_length*n] - elif beat[2] == c_slice[2]: beat = self[beat[0] - beat[1] + pattern_length*n : beat[0] + pattern_length*n] - - # create a variable if `%` in beat - if c_misc[7] in beat_str: metric = parse._metric_get(beat_str, beat, metrics, c_misc[7]) - - except IndexError: - tries += 1 - if tries > 30: break - continue - - if len(beat[0])<1: continue #Ignores empty beats - - # Applies effects - effect = b[1] - for e in effect: - if e[0] in effects: - v = e[1] - e = effects[e[0]] - # parse effect value - if isinstance(v, str): - if metric is not None: v = parse._metric_replace(v, metric, c_misc[7]) - v = utils._safer_eval(v) - - # effects - if e == 'volume': - if v is None: v = 0 - beat = beat * v - elif e == 'downsample': - if v is None: v = 8 - beat = np.repeat(beat[:,::v], v, axis=1) - elif e == 'gradient': - beat = np.gradient(beat, axis=1) - elif e == 'reverse': - beat = beat[:,::-1] - else: - beat = e(beat, v) - - # clip beat to -1, 1 - beat = np.clip(beat, -1, 1) - - # checks if length limit has been reached - if limit_length is not None: - total_length += len(beat[0]) - if total_length>= limit_length: - stop = True - break - - # Adds the processed beat to list of beats. - # Separator is `,` - if operators[num] == c_join[0]: - result.append(beat) - - # Makes sure beat doesn't get added on top of previous beat multiple times when pattern is out of range of song beats, to avoid distorted end. - elif tries<2: - - # Separator is `;` - always use first beat length, normalizes volume to 1.5 - if operators[num] == c_join[1]: - length = len(beat[0]) - prev_length = len(result[-1][0]) - if length > prev_length: - result[-1] += beat[:,:prev_length] - else: - result[-1][:,:length] += beat - limit = np.max(result[-1]) - if limit > 1.5: - result[-1] /= limit*0.75 - - # Separator is `~` - cuts to shortest - elif operators[num] == c_join[2]: - minimum = min(len(beat[0]), len(result[-1][0])) - result[-1] = beat[:,:minimum-1] + result[-1][:,:minimum-1] - - # Separator is `&` - extends to longest - elif operators[num] == c_join[3]: - length = len(beat[0]) - prev_length = len(result[-1][0]) - if length > prev_length: - beat[:,:prev_length] += result[-1] - result[-1] = beat - else: - result[-1][:,:length] += beat - - # Separator is `^` - uses first beat length and multiplies beats, used for sidechain - elif operators[num] == c_join[4]: - length = len(beat[0]) - prev_length = len(result[-1][0]) - if length > prev_length: - result[-1] *= beat[:,:prev_length] - else: - result[-1][:,:length] *= beat - - - # Separator is `$` - always use first beat length, additionally sidechains first beat by second - elif operators[num] == c_join[5]: - from . import effects - length = len(beat[0]) - prev_length = len(result[-1][0]) - if length > prev_length: - result[-1] *= effects.to_sidechain(beat[:,:prev_length]) - result[-1] += beat[:,:prev_length] - else: - result[-1][:,:length] *= effects.to_sidechain(beat) - result[-1][:,:length] += beat - - # Separator is `}` - always use first beat length - elif operators[num] == c_join[6]: - length = len(beat[0]) - prev_length = len(result[-1][0]) - if length > prev_length: - result[-1] += beat[:,:prev_length] - else: - result[-1][:,:length] += beat - - - # smoothing - for i in range(len(result)-1): - current1 = result[i][0][-2] - current2 = result[i][0][-1] - following1 = result[i+1][0][0] - following2 = result[i+1][0][1] - num = (abs(following1 - (current2 + (current2 - current1))) + abs(current2 - (following1 + (following1 - following2))))/2 - if num > 0.0: - num = int(smoothing*num) - if num>3: - try: - line = scipy.interpolate.CubicSpline([0, num+1], [0, following1], bc_type='clamped')(np.arange(0, num, 1)) - #print(line) - line2 = np.linspace(1, 0, num)**0.5 - result[i][0][-num:] *= line2 - result[i][1][-num:] *= line2 - result[i][0][-num:] += line - result[i][1][-num:] += line - except (IndexError, ValueError): pass - - self.beatmap = beatmap_default.copy() - # Beats are conjoined into a song - import functools - import operator - # Makes a [l, r, l, r, ...] list of beats (left and right channels) - result = functools.reduce(operator.iconcat, result, []) - - # Every first beat is conjoined into left channel, every second beat is conjoined into right channel - if return_audio is False: self.audio = np.array([functools.reduce(operator.iconcat, result[::2], []), functools.reduce(operator.iconcat, result[1:][::2], [])]) - else: return np.array([functools.reduce(operator.iconcat, result[::2], []), functools.reduce(operator.iconcat, result[1:][::2], [])]) - - def normalize_beats(self): - if self.normalized is not None: - if ',' in self.normalized: - self.beatswap(pattern = self.normalized) - else: - from . import presets - self.beatswap(*presets.get(self.normalized)) - - def image_generate(self, scale=1, shift=0, mode = 'median'): - if self.beatmap is None: self.beatmap_generate() - beatmap_default = self.beatmap.copy() - self.beatmap_shift(shift) - self.beatmap_scale(scale) - from .image import generate as image_generate - self.image = image_generate(song = self, mode = mode, log = self.log) - self.beatmap = beatmap_default.copy() - - def image_write(self, output='', mode = 'color', max_size = 4096, ext = 'png', rotate=True, suffix = ''): - from .image import write as image_write - output = io._outputfilename(output, self.path, ext=ext, suffix = suffix) - image_write(self.image, output = output, mode = mode, max_size = max_size , rotate = rotate) - return output - - - -def beatswap(audio = None, pattern = 'test', scale = 1, shift = 0, length = None, sr = None, output = '', log = True, suffix = ' (beatswap)', copy = True): - if not isinstance(audio, song): audio = song(audio = audio, sr = sr, log = log) - elif copy is True: - beatmap = audio.beatmap - path = audio.path - audio = song(audio = audio.audio, sr = audio.sr) - audio.beatmap = beatmap - audio.path = path - audio.beatswap(pattern = pattern, scale = scale, shift = shift, length = length) - if output is not None: - return audio.write(output = output, suffix = suffix) - else: return audio - -def image(audio, scale = 1, shift = 0, sr = None, output = '', log = True, suffix = '', max_size = 4096): - if not isinstance(audio, song): audio = song(audio = audio, sr = sr, log = log) - audio.image_generate(scale = scale, shift = shift) - if output is not None: - return audio.image_write(output = output, max_size=max_size, suffix=suffix) - else: return audio.image \ No newline at end of file diff --git a/spaces/elena-k/OmdenaTriesteLongCovid/app.py b/spaces/elena-k/OmdenaTriesteLongCovid/app.py deleted file mode 100644 index 6d4b4c7897a3bb8e5e89cf12c7de1f6e3b4efcb0..0000000000000000000000000000000000000000 --- a/spaces/elena-k/OmdenaTriesteLongCovid/app.py +++ /dev/null @@ -1,203 +0,0 @@ -import gradio as gr -import numpy as np -import pandas as pd -from PIL import Image -#demo = gr.Blocks() - -#with demo : - #gr.Markdown("Test our topic modeling model !") - # tweets_list = ['Long covid malattia strana. Berlusconi è affetto dalla forma più lunga denominata ""Ruby ter""', - # 'Perché, l’unica conseguenza da temere è la morte ? Ha idea di quanti organi può danneggiare il covid ? Mai stato in terapia intensiva? Mai sentito parlare di #LongCovid ?', - # 'C\'è uno studio a riguardo condotto proprio sui più giovani che identifica il long covid alla stregua di ogni strascico di malattie infettive polmonari. Il long covid è dannoso come una polmonite in quanto a effetti a lungo termine. Se lo ritrovo te lo passo, ora sono fuori...', - # 'Mio cugino è guarito dal covid dopo 4 mesi di ospedale, di cui più di 2 intubato, grazie alla testardaggine dei medici che hanno fatto di tutto per salvargli la vita a 57 anni. Ora è nella fase long covid per recuperare i danni fisici riportati', - # 'È importante parlare di #LongCovid e sensibilizzare tutti, giovani compresi, che non è un gioco ma una malattia debilitante/invalidante che può stravolgere la vita. Io 39 anni e #LongCovid da 18 mesi (con 4 figli piccoli). #countlongcovid', - # 'Il Long Covid è una diretta conseguenza di quelli che nei primi tempi sono stati abbandonati a se stessi giorni e giorni e curati solo quando molto aggravati, in ospedale. Se ti curi tempestivamente non hai nessuna conseguenza.', - # 'Non sai di cosa parli sono stato un mese attaccato ad un respiratore e sono salvo per miracolo. Ma questo è niente in confronto con il #LongCovid che mi porto dietro da mesi e mesi. Siete dei criminali a pensare ch\'è meglio curare che prevenire. Dei pazzi da rinchiudere', - # 'A chi dice ""Il COVID è innocuo per i bambini"". Oltre ad alcuni decessi 500+ bambini sono morti di COVID negli USA 2020) c\'è #LongCOVID. Se ne parla in questo studio: ""Studio inglese rileva che il COVID a lungo colpisce fino a 1 bambino su 7 mesi dopo l\'infezione'] - - #tweets = gr.Dropdown(choices=[],label="Example tweets") - #tweets.update(choices = ["A", "B"], value="A") - -import gradio as gr -with gr.Blocks(title = "Long Covid in Italy") as demo: - - gr.Markdown( - """ - #
      Using social media to study Long Covid in Italy
      - """) - with gr.Tabs(): - with gr.TabItem("Topic modeling"): - - gr.Markdown( - """ - ##
      Topic modeling analysis on Twitter
      - """ - ) - with gr.Tabs(): - with gr.TabItem("July-Semptember 2021"): - with gr.Row(): - gr.Image("./wordclouds_Q1 data.png", label="July-September 2021") - - - - tweets_list = ['C\'è uno studio a riguardo condotto proprio sui più giovani che identifica il long covid alla stregua di ogni strascico di malattie infettive polmonari. Il long covid è dannoso come una polmonite in quanto a effetti a lungo termine. Se lo ritrovo te lo passo, ora sono fuori...', - 'Mio cugino è guarito dal covid dopo 4 mesi di ospedale, di cui più di 2 intubato, grazie alla testardaggine dei medici che hanno fatto di tutto per salvargli la vita a 57 anni. Ora è nella fase long covid per recuperare i danni fisici riportati', - 'È importante parlare di #LongCovid e sensibilizzare tutti, giovani compresi, che non è un gioco ma una malattia debilitante/invalidante che può stravolgere la vita. Io 39 anni e #LongCovid da 18 mesi (con 4 figli piccoli). #countlongcovid', - 'Il Long Covid è una diretta conseguenza di quelli che nei primi tempi sono stati abbandonati a se stessi giorni e giorni e curati solo quando molto aggravati, in ospedale. Se ti curi tempestivamente non hai nessuna conseguenza.', - 'Non sai di cosa parli sono stato un mese attaccato ad un respiratore e sono salvo per miracolo. Ma questo è niente in confronto con il #LongCovid che mi porto dietro da mesi e mesi. Siete dei criminali a pensare ch\'è meglio curare che prevenire. Dei pazzi da rinchiudere', - 'A chi dice ""Il COVID è innocuo per i bambini"". Oltre ad alcuni decessi 500+ bambini sono morti di COVID negli USA 2020) c\'è #LongCOVID. Se ne parla in questo studio: ""Studio inglese rileva che il COVID a lungo colpisce fino a 1 bambino su 7 mesi dopo l\'infezione'] - - q1_data_topic_list=['0. Discussion about scientific studies','1. Anxiety about pandemic and the information about it OR Specific people in the context of LC', - '2. Discussion about LC impact in terms of time periods','3. Discussion about LC impact on patient life (impact on life so far or scope for lifelong impact)' , - '4. Treatment scenario', '5. Impact/Consequences of LC on children'] - - - topic_dist_list=[[(0, 0.03202321), (1, 0.26949906), (2, 0.05191976), (3, 0.24642), (4, 0.33530965), (5, 0.064828366)], - [(0, 0.04221856), (1, 0.0374047), (2, 0.06841562), (3, 0.07528768), (4, 0.3782018), (5, 0.3984716)], - [(0, 0.2181524), (1, 0.13380228), (2, 0.021277282), (3, 0.48123622), (4, 0.01883339), (5, 0.12669843)], - [(0, 0.0145399235), (1, 0.01287178), (2, 0.43158862), (3, 0.24750596), (4, 0.264914), (5, 0.028579665)], - [(0, 0.016303344), (1, 0.014450405), (2, 0.36162496), (3, 0.48426068), (4, 0.023487965), (5, 0.09987263)], - [(0, 0.018612841), (1, 0.016472807), (2, 0.44922927), (3, 0.033633586), (4, 0.026889767), (5, 0.45516175)], - [(0, 0.016305258), (1, 0.014453228), (2, 0.7628153), (3, 0.029092493), (4, 0.14613572), (5, 0.031198042)], - [(0, 0.016303508), (1, 0.014449066), (2, 0.15605325), (3, 0.029179793), (4, 0.023376595), (5, 0.7606378)]] - - def display_output(tweet_index): - topics = "
        \ -
      1. Discussion about scientific studies
      2. \ -
      3. Anxiety about pandemic and the information about it OR Specific people in the context of LC
      4. \ -
      5. Discussion about LC impact in terms of time periods
      6. \ -
      7. Discussion about LC impact on patient life (impact on life so far or scope for lifelong impact)
      8. \ -
      9. Treatment scenario
      10. \ -
      11. Impact/Consequences of LC on children
      12. \ -
      " - item = topic_dist_list[tweet_index] - distribution = f'

      Topics Distribution

      ({item[0][0]+1}, {item[0][1]}), ({item[1][0]+1}, {item[1][1]}), ({item[2][0]+1}, {item[2][1]}), ({item[3][0]+1}, {item[3][1]}), ({item[4][0]+1}, {item[4][1]}), ({item[5][0]+1}, {item[5][1]})\ - ' - return gr.HTML.update(distribution, visible=True) - - topics = '\ -

      Topics July to Sept, 2021

      \ -
        \ -
      1. 1. Discussion about scientific studies
      2. \ -
      3. 2. Anxiety about pandemic and the information about it OR Specific people in the context of LC
      4. \ -
      5. 3. Discussion about LC impact in terms of time periods
      6. \ -
      7. 4. Discussion about LC impact on patient life (impact on life so far or scope for lifelong impact)
      8. \ -
      9. 5. Treatment scenario
      10. \ -
      11. 6. Impact/Consequences of LC on children
      12. \ -
      \ - ' - - Q1_topics = gr.HTML(topics, visible=True) - - gr.Markdown( - """ - ### Test our topic modeling model : select a tweet and check the topics distribution ! - """ - ) - - tweet = gr.Dropdown(tweets_list, label="Example tweets", interactive=True, type="index") - - model_output = gr.HTML("", visible=False) - tweet.change(display_output, tweet, model_output) - - with gr.TabItem("October 2021-July 2022"): - - topic_dist_list_Q2_Q4=[[(0, 0.4377157), (1, 0.05924045), (2, 0.1525337), (3, 0.1941842), (4, 0.075339705), (5, 0.08098622)], - [(0, 0.16064012), (1, 0.063850455), (2, 0.08664099), (3, 0.2870743), (4, 0.081202514), (5, 0.32059166)], - [(0, 0.14904374), (1, 0.059243646), (2, 0.08039133), (3, 0.26638654), (4, 0.07534457), (5, 0.36959016)], - [(0, 0.14897935), (1, 0.059245925), (2, 0.08039324), (3, 0.41068354), (4, 0.14752874), (5, 0.15316921)], - [(0, 0.089826144), (1, 0.069229595), (2, 0.09393969), (3, 0.5643193), (4, 0.08804329), (5, 0.09464199)], - [(0, 0.08284077), (1, 0.29718927), (2, 0.08663448), (3, 0.36485678), (4, 0.08119658), (5, 0.08728213)]] - - def display_output_Q2_Q4(tweet_index): - item = topic_dist_list_Q2_Q4[tweet_index] - distribution = f'

      Topics Distribution

      ({item[0][0]+1}, {item[0][1]}), ({item[1][0]+1}, {item[1][1]}), ({item[2][0]+1}, {item[2][1]}), ({item[3][0]+1}, {item[3][1]}), ({item[4][0]+1}, {item[4][1]}), ({item[5][0]+1}, {item[5][1]})\ - ' - return gr.HTML.update(distribution, visible=True) - - with gr.Row(): - gr.Image("./wordclouds_Q2-Q2 data.png", label="October 2021-July 2022") - - Q2_Q4_topics = '\ -

      Topics October 2021 to July 2022

      \ -
        \ -
      1. 1. Variants
      2. \ -
      3. 2. Vaccine side-effects (and general anti-vax/ anti-LC narrative)
      4. \ -
      5. 3. Aftermath of LC or vaccine
      6. \ -
      7. 4. Impact of LC in terms of time OR Risks/Symptoms of LC
      8. \ -
      9. 5. Anger or anxiety about LC information
      10. \ -
      11. 6. Discussion or Information about the science/knowledge surrounding LC
      12. \ -
      \ - ' - - - Q2_Q4_topics_html = gr.HTML(Q2_Q4_topics, visible=True) - - tweet_list_Q2_Q4=["Omicron e Long Covid: palpitazioni e perdita d'udito tra i sintomi - #Omicron #Covid: #palpitazioni ", - 'Long Covid e trombosi. La correlazione è spiegata da Giovanni Esposito, Presidente GISE, in un articolo sul sito https://t.co/8TdI9nhDHY e avvalorata da uno studio svedese pubblicato sul British Medical Journal. https://t.co/UebaXUtfbz', - 'Peccato che il ""long COVID"" che è proprio ciò di cui parla l\'esimio dottore citato determini una alterazione o soppressione del sistema immunitario di cui si sa ancora poco ma che può portare a conseguenze fatali per il paziente.', - 'Il Long covid rappresentava un problema solo fino ad aprile 2021, i vaccini hanno molto ridotto l\'impatto e la gravità delle patologie a lungo termine, in pratica si può dire che il long covid non esiste più', - 'Sicuro, 100-150 morti al giorno, 6 ondate l anno, rischio long covid, rischio evoluzionario, e via dicendo — finitissimo', - 'le cure le fai giorno dopo giorno... ci sono casi di long-covid dopo 6 mesi dall\'infezione. [Vaccino > >Cure] è un dato di fatto', - 'A parte il rischio di sviluppare il #longcovid, il pericolo grave di lasciar circolare il virus e di farlo diventare endemico come preconizza il governo e lo sciagurato #speranza non è nel decorso del singolo caso ma nell\'aumento proporzionale dell\'insorgere di nuove varianti'] - - gr.Markdown( - """ - ### Test our topic modeling model : select a tweet and check the topics distribution ! - """ - ) - - tweet_Q2_Q4 = gr.Dropdown(tweet_list_Q2_Q4, label="Example tweets", interactive=True, type="index") - - model_output_Q2_Q4 = gr.HTML("", visible=False) - tweet_Q2_Q4.change(display_output_Q2_Q4, tweet_Q2_Q4, model_output_Q2_Q4) - - with gr.TabItem("Word frequency"): - def display_freq_plot(group_index): - image_path = f'./word_frequency/frequency_plots/0{group_index + 1}.jpg' - file_path = f'./word_frequency/frequency_data/0{group_index + 1}.csv' - freq_df = pd.read_csv(file_path)#.to_html(index=False, justify="center", border=1) - return gr.Image.update(image_path), gr.Dataframe.update(value = freq_df.to_numpy().tolist()) - - def display_freq_table(group_index): - file_path = f'./word_frequency/frequency_plots/0{group_index + 1}.csv' - freq_df = pd.read_csv(file_path) - return gr.Dataframe(value = freq_df.to_numpy().tolist(), headers = freq_df.columns.values.tolist()) - - - gr.Markdown( - """ - ##
      Word frequency analysis
      - """ - ) - gr.Markdown( - """ - ### Wordclouds per semester - """ - ) - with gr.Row(): - gr.Image("./word_frequency/twint_scraped_2021.png", label="July 2021 - January 2022") - gr.Image("./word_frequency/twint_scraped_2022.png", label="February 2022 - July 2022") - - - word_groups = ['figli, pediatrico, genitori [children, pediatric, parents]', 'complicanze, postumi, disturbi [complications, after-effects, disorders]', 'ansia, nebbia mentale, stanchezza [anxiety, mental fog, fatigue]', - 'vista, gusto, olfatto [sight, taste, smell]', 'scuole, lockdown, longcovidkids [schools, lockdown, longcovidkids]'] - gr.Markdown( - """ - ### Select a group of words to see the frequency in time - """ - ) - word_groups_selection = gr.Dropdown(word_groups, value=word_groups[0], label="Words", interactive=True, type="index") - with gr.Box(): - word_group_freq_image = gr.Image('./word_frequency/frequency_plots/01.jpg', label="Frequency plot") - - freq_df = pd.read_csv('./word_frequency/frequency_data/01.csv')#.to_html(index=False, justify="center", border=1) - freq_table = gr.Dataframe(value = freq_df.to_numpy().tolist(), col_count=5, headers=freq_df.columns.values.tolist()) - - word_groups_selection.change(display_freq_plot, word_groups_selection, - [word_group_freq_image, freq_table]) - -if __name__ == "__main__": - demo.launch(share=True, debug=True) - -#demo.launch(share=True, debug=True) \ No newline at end of file diff --git a/spaces/elevenlabs/tts/README.md b/spaces/elevenlabs/tts/README.md deleted file mode 100644 index 047e44165b7abf68b7e650fea8679961e5a0b377..0000000000000000000000000000000000000000 --- a/spaces/elevenlabs/tts/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ElevenLabs TTS -emoji: 🗣️ -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/login/index.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/login/index.tsx deleted file mode 100644 index e623d5f2d964253a2d2f6c7129c5cdf8f4c3b6ee..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/login/index.tsx +++ /dev/null @@ -1,58 +0,0 @@ -"use client"; - -import { useMount, useUpdateEffect } from "react-use"; -import Image from "next/image"; -import { FaUserAstronaut } from "react-icons/fa"; - -import { useUser } from "@/utils/useUser"; -import HFLogo from "@/assets/images/hf-logo.svg"; -import classNames from "classnames"; -import { HiCheckBadge } from "react-icons/hi2"; - -export const Login = ({ code }: { code?: string }) => { - const { getAuthorization, user } = useUser(); - - useMount(() => getAuthorization(code as string)); - - useUpdateEffect(() => { - if (user) { - setTimeout(() => { - window.location.href = "/"; - }, 2000); - } - }, [user]); - - return ( -
      -
      -

      - {user - ? "You have been logged in successfully. \nRedirecting..." - : "Retrieving your information..."} -

      -
      - HF Logo - {user ? ( - - ) : ( -
      - - - -
      - )} -
      - -
      -
      -
      -
      - ); -}; diff --git a/spaces/falterWliame/Face_Mask_Detection/Bluestacks 2.0.2.5623 MOD Rooted Offline Installer - Core-X Utorrent.md b/spaces/falterWliame/Face_Mask_Detection/Bluestacks 2.0.2.5623 MOD Rooted Offline Installer - Core-X Utorrent.md deleted file mode 100644 index f12ddb003fb9c72c262b63cba88ff3cc64a83a23..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Bluestacks 2.0.2.5623 MOD Rooted Offline Installer - Core-X Utorrent.md +++ /dev/null @@ -1,10 +0,0 @@ -
      -

      carrier broadband home edition 2015 [100m] [carrier]. [7-zip] 2017. 1.6.2 portable. [android app player 2.2.10.0 (07.09.2009) [portable]. bluestacks 4.50.3.2580. offline installer includes a full featured media app. full features blue stacks mobile app player setup iso image with full offline installer and android os for windows 7/8. total download size 138.

      -

      bluestacks 5.10.0.1050. 46999. [offline installer]. [7-zip] 2017. [100m] portable. [windows 10 6. 2. 1709]. [carrier (prt)]. android app player 2.2.0 (07.09.2009) [portable]. bluestacks 4.42.1.2192. offline installer includes a full featured media app. full features blue stacks mobile app player setup iso image with full offline installer and android os for windows 7/8. total download size 138.

      -

      Bluestacks 2.0.2.5623 MOD Rooted {Offline Installer} - {Core-X} utorrent


      Downloadhttps://urlca.com/2uDdUm



      -

      zeustorrents provides you free download for bluestacks offline installer 1.0.2.4144 for your pc. want to download an apk file for your android mobile phone? its simple click on the link bluestacks rooted.

      -

      bluestacks is a platform that allows you to run android apps and games in a similar way as you run on your android smartphone. dont want to use bluestacks all the time? bluestacks offline installer - 8.6.1.1317. download and use.

      -

      bluestacks 4.30.50.1690 with root free download new and latest version for windows. it is full offline installer standalone setup of bluestacks 4.1690. currently, it is not supported to install the older version of bluestacks.

      -

      bluestacks hd app player pro v.0.10.4321 + sdcard mod rooted [multi. 6 days ago. launching the downloaded offline installers directly will install bluestacks 5 nougat 32-bit on your computer by default. to know how to install.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/IMyFone LockWiper 2020 Crack With Serial Key Latest.md b/spaces/falterWliame/Face_Mask_Detection/IMyFone LockWiper 2020 Crack With Serial Key Latest.md deleted file mode 100644 index cb11e6b13b3e8ebb341f325581968dd0c835b35a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/IMyFone LockWiper 2020 Crack With Serial Key Latest.md +++ /dev/null @@ -1,9 +0,0 @@ -
      -

      first of all, we should state that imyfone is a well-established company, and a good number of their products have been around for the better part of a decade. and in that time, their products have been purchased over a million times worldwide. theres no doubt that this company has a solid reputation and i’d give them a real chance without reservation.

      -

      iMyFone LockWiper 2020 Crack With Serial Key Latest


      DOWNLOADhttps://urlca.com/2uDdVc



      -

      take your device to its happy zone and, with an iphone, ipad, or ipod touch, connect it to your computer. in case you have forgotten the passcode, you can utilize imyfone lockwiper to rev up your device from a desktop and begin entering for a new one. in this way, you can use ios to recuperate the passcode or screen time passcode. on the off chance that you are on your pc or mac, you will realize that the apple id is accessible on its control board, so you can reset it from there.

      -

      however, it works only for the iphone, ipad, and ipod touch. you can change your passcode or screen time passcode to the default one by following these steps. inside settings, on the general tab, you can change the apple id password, which is directly related to the account for your ipad, ipod, or iphone. tap the option that you can reset passcode, and then tap the options button next to it. we will discuss the different methods of getting to the passcode on the following pages.

      -

      now we have the passcode, how do we understand the apple id without losing the data? this application does not relate to your apple id. because it is a stand-alone application, it can be utilized with the utilization of an apple id and the application doesn’t overhaul any essential setup. remember that ios keeps the secret, so the apple id is left to change the passcode or screen time passcode as it merits. if apple id can be recovered by using this application, it can’t be harmed with the lost passcode or screen time passcode.

      -

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download BeamNG.drive v0.27 for Free and Enjoy the New Features.md b/spaces/fatiXbelha/sd/Download BeamNG.drive v0.27 for Free and Enjoy the New Features.md deleted file mode 100644 index 48c742f4fcbfdb82c1ebccca63d9b99ab50f7965..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download BeamNG.drive v0.27 for Free and Enjoy the New Features.md +++ /dev/null @@ -1,113 +0,0 @@ -
      -

      BeamNG.drive 0.27 Free Download: Everything You Need to Know

      -

      If you are a fan of driving games, you might have heard of BeamNG.drive, a realistic and immersive driving simulator that lets you customize and experiment with various vehicles in different environments. In this article, we will tell you everything you need to know about BeamNG.drive 0.27, the latest update that adds new vehicles, a new level, and many improvements to the game. We will also show you how to download BeamNG.drive 0.27 for free, either by buying the game or by using alternative methods.

      -

      beamng drive 0.27 free download


      DOWNLOAD ::: https://urllie.com/2uND3P



      -

      What is BeamNG.drive?

      -

      BeamNG.drive is a driving game that aims to offer a realistic and dynamic driving experience. Unlike most driving games that use pre-defined physics and damage models, BeamNG.drive uses a soft-body physics engine that simulates every component of a vehicle in real time. This means that the vehicles can deform and react to collisions, terrain, and other forces in a natural and believable way.

      -

      A realistic and immersive driving game

      -

      BeamNG.drive offers a variety of vehicles to drive, from cars and trucks to motorcycles and planes. Each vehicle has its own characteristics, such as weight, power, torque, suspension, steering, brakes, tires, and more. You can drive them in different modes, such as free roam, scenarios, campaigns, time trials, or races. You can also adjust the difficulty level, the weather conditions, the traffic density, and the damage settings to suit your preferences.

      -

      BeamNG.drive also features realistic sound effects that change depending on the vehicle, the environment, and the situation. You can hear the engine revving, the tires screeching, the metal crunching, and the glass shattering. You can also choose from different camera views, such as first-person, third-person, hood cam, chase cam, or orbit cam.

      -

      How to get beamng drive 0.27 for free on PC
      -Beamng drive 0.27 free download full version
      -Beamng drive 0.27 free download with mods
      -Beamng drive 0.27 free download no virus
      -Beamng drive 0.27 free download windows 10
      -Beamng drive 0.27 free download mac
      -Beamng drive 0.27 free download android
      -Beamng drive 0.27 free download steam
      -Beamng drive 0.27 free download crack
      -Beamng drive 0.27 free download torrent
      -Beamng drive 0.27 free download mega
      -Beamng drive 0.27 free download mediafire
      -Beamng drive 0.27 free download google drive
      -Beamng drive 0.27 free download update
      -Beamng drive 0.27 free download latest version
      -Beamng drive 0.27 free download new cars
      -Beamng drive 0.27 free download SP Dunekicker
      -Beamng drive 0.27 free download gameplay
      -Beamng drive 0.27 free download review
      -Beamng drive 0.27 free download tutorial
      -Beamng drive 0.27 free download system requirements
      -Beamng drive 0.27 free download error fix
      -Beamng drive 0.27 free download online multiplayer
      -Beamng drive 0.27 free download mods install
      -Beamng drive 0.27 free download custom maps
      -Beamng drive 0.27 free download realistic physics
      -Beamng drive 0.27 free download best settings
      -Beamng drive 0.27 free download tips and tricks
      -Beamng drive 0.27 free download cheats and hacks
      -Beamng drive 0.27 free download comparison with previous versions
      -Beamng drive 0.27 free download release date and news
      -Beamng drive 0.27 free download developer blog and patch notes[^1^] [^2^]
      -Beamng drive 0.27 free download official website and forum[^1^] [^2^]
      -Beamng drive 0.27 free download fan community and discord server[^1^] [^2^]
      -Beamng drive 0.27 free download videos and screenshots[^1^] [^2^]
      -Beamng drive 0.27 free download challenges and achievements[^1^] [^2^]
      -Beamng drive 0.27 free download fun and funny moments[^1^] [^2^]
      -Beamng drive 0.27 free download crashes and destruction[^1^] [^2^]
      -Beamng drive 0.27 free download stunts and jumps[^1^] [^2^]
      -Beamng drive 0.27 free download races and chases[^1^] [^2^]

      -

      A sandbox for vehicle customization and experimentation

      -

      One of the most fun aspects of BeamNG.drive is that you can customize and experiment with your vehicles in various ways. You can change the color, the wheels, the engine, the transmission, the suspension, the body parts, and more. You can also add accessories such as lights, horns, sirens, spoilers, or decals. You can even swap parts between different vehicles or create your own custom vehicles from scratch.

      -

      BeamNG.drive also allows you to test your vehicles in different situations and see how they perform and react. You can crash them into walls, trees, buildings, or other vehicles. You can drive them off cliffs, ramps, bridges, or loops. You can make them fly through the air or sink into water. You can even attach them to ropes or magnets and swing them around.

      -

      A platform for modding and community content

      -

      Another great feature of BeamNG.drive is that it supports modding and community content. You can download and install mods that add new vehicles , maps, scenarios, skins, sounds, or scripts to the game. You can also create your own mods using the in-game editor or external tools. You can share your mods with other players on the official forum or on websites such as BeamNG.com or Nexus Mods.

      -

      BeamNG.drive also has a large and active community of players who create and share their own content. You can watch videos of their gameplay, stunts, crashes, or experiments on YouTube or Twitch. You can also join their multiplayer sessions, chat with them on Discord, or participate in their challenges and competitions.

      -

      What's new in BeamNG.drive 0.27?

      -

      BeamNG.drive 0.27 is the latest update that was released on June 17, 2023. It adds new vehicles, a new level, and many improvements to the game. Here are some of the highlights of this update:

      -

      New vehicles: SP Dunekicker, SP Rock Basher, Autobello Autobuggy, Autobello Stambecco, FPU Wydra

      -

      BeamNG.drive 0.27 introduces five new vehicles that are designed for off-road adventures. The SP Dunekicker is a buggy that can glide over sand dunes and jump over obstacles. The SP Rock Basher is a rock crawler that can climb steep slopes and traverse rough terrain. The Autobello Autobuggy is a compact and agile buggy that can zip through narrow spaces and tight corners. The Autobello Stambecco is a rally car that can handle dirt roads and gravel tracks. The FPU Wydra is a hovercraft that can float over water and land.

      -

      New level: Johnson Valley

      -

      BeamNG.drive 0.27 also adds a new level called Johnson Valley, which is based on a real-life location in California. It is a large and diverse map that features various landscapes and landmarks, such as mountains, valleys, lakes, rivers, canyons, rocks, roads, trails, campsites, and more. It is a perfect place to explore and enjoy the new vehicles.

      -

      Improved physics, graphics, sound, and performance

      -

      BeamNG.drive 0.27 also improves the physics, graphics, sound, and performance of the game. The physics engine has been updated to support more realistic tire behavior, aerodynamics, suspension geometry, drivetrain dynamics, and collision detection. The graphics engine has been enhanced to support better lighting, shadows, reflections, textures, particles, and water effects. The sound engine has been refined to support more accurate engine sounds, tire sounds, wind sounds, and environmental sounds. The performance has been optimized to reduce loading times, stuttering, and lag.

      -

      New features: Tire Pressure Management System , braked differential steering, rotators as speedometer sources

      -

      BeamNG.drive 0.27 also adds some new features that enhance the gameplay and the realism of the game. The Tire Pressure Management System (TPMS) allows you to adjust the tire pressure of your vehicles, which affects their grip, handling, and fuel consumption. The braked differential steering allows you to steer your vehicles by applying brakes to individual wheels, which is useful for off-road vehicles and tanks. The rotators as speedometer sources allow you to use rotating parts such as propellers, turbines, or fans as speed indicators, which is useful for aircraft and boats.

      -

      How to download BeamNG.drive 0.27 for free?

      -

      If you are interested in downloading BeamNG.drive 0.27 for free, you have two options: the official way and the unofficial way. Here are the pros and cons of each method:

      -

      The official way: buy the game on Steam or Humble Bundle

      -

      The official way to download BeamNG.drive 0.27 for free is to buy the game on Steam or Humble Bundle. The game costs $24.99 on both platforms, but you can sometimes find discounts or sales. By buying the game, you will get access to the latest updates, the official mods, the multiplayer mode, and the technical support. You will also support the developers and help them improve the game.

      -

      The downside of this method is that you have to pay money to get the game, which might not be affordable or desirable for some people. You also have to meet the minimum system requirements to run the game smoothly, which are:

      - - - - - - - -
      OSWindows 10 64 Bit
      ProcessorIntel i5 2500K or AMD Ryzen 3 1200
      Memory8 GB RAM
      GraphicsNvidia GeForce GTX 760 or AMD Radeon R9 270X
      DirectXVersion 11
      Storage20 GB available space
      -

      The unofficial way: use a torrent or a cracked version (not recommended)

      -

      The unofficial way to download BeamNG.drive 0.27 for free is to use a torrent or a cracked version of the game. You can find these files on various websites or forums, such as Pirate Bay, Skidrow, or IGG Games. By using this method, you can get the game without paying any money and without meeting the minimum system requirements.

      -

      The downside of this method is that it is illegal and unethical, as it violates the intellectual property rights of the developers and harms their income. You also risk getting viruses, malware, or spyware on your computer, as these files are often infected or corrupted. You also won't get access to the latest updates, the official mods, the multiplayer mode, or the technical support. You might also face legal consequences if you are caught downloading or distributing pirated content.

      -

      The pros and cons of each method

      -

      To summarize, here are the pros and cons of each method:

      - - - - -
      MethodProsCons
      The official way- Access to the latest updates
      - Access to the official mods
      - Access to the multiplayer mode
      - Access to the technical support
      - Support for the developers
      - Cost money
      - Require minimum system requirements
      The unofficial way- No cost
      - No system requirements
      - Illegal and unethical
      - Risk of viruses, malware, or spyware
      - No access to the latest updates
      - No access to the official mods
      - No access to the multiplayer mode
      - No access to the technical support
      - Legal consequences
      -

      Conclusion

      -

      BeamNG.drive 0.27 is a fantastic update that adds new vehicles, a new level, and many improvements to the game. It is a realistic and immersive driving simulator that lets you customize and experiment with various vehicles in different environments. It also supports modding and community content that enhance the gameplay and the realism of the game. If you want to download BeamNG.drive 0.27 for free, you have two options: the official way and the unofficial way. The official way is to buy the game on Steam or Humble Bundle, which gives you many benefits and supports the developers. The unofficial way is to use a torrent or a cracked version of the game, which is illegal and risky and does not give you any benefits. We recommend that you choose the official way and enjoy the game legally and safely.

      -

      FAQs

      -

      Here are some frequently asked questions about BeamNG.drive 0.27:

      -

      Q: How big is BeamNG.drive 0.27?

      -

      A: BeamNG.drive 0.27 is about 5 GB in size, but it may vary depending on your platform and your previous version of the game.

      -

      Q: How can I update BeamNG.drive to 0.27?

      -

      A: If you have bought the game on Steam or Humble Bundle, you can update BeamNG.drive to 0.27 automatically or manually through the platform. If you have downloaded the game from other sources, you may need to download and install the update separately.

      -

      Q: Can I play BeamNG.drive 0.27 offline?

      -

      A: Yes, you can play BeamNG.drive 0.27 offline, except for the multiplayer mode and some online features such as leaderboards or achievements.

      -

      Q: Can I play BeamNG.drive 0.27 with a controller or a wheel?

      -

      A: Yes, you can play BeamNG.drive 0.27 with a controller or a wheel, as well as a keyboard and a mouse. You can customize the controls and the sensitivity to your liking in the settings menu.

      -

      Q: Can I run BeamNG.drive 0.27 on my PC?

      -

      A: To run BeamNG.drive 0.27 on your PC, you need to meet the minimum system requirements, which are:

      - - - - - - - -
      OSWindows 10 64 Bit
      ProcessorIntel i5 2500K or AMD Ryzen 3 1200
      Memory8 GB RAM
      GraphicsNvidia GeForce GTX 760 or AMD Radeon R9 270X
      DirectXVersion 11
      Storage20 GB available space
      -

      If you want to run the game at higher settings or resolutions, you may need higher system requirements.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Zombie vs Plants on Your PC for Free - Download Now.md b/spaces/fatiXbelha/sd/Enjoy Zombie vs Plants on Your PC for Free - Download Now.md deleted file mode 100644 index 18585941004944e30db1f16e079ed27f198c4472..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Zombie vs Plants on Your PC for Free - Download Now.md +++ /dev/null @@ -1,116 +0,0 @@ - -

      Zombie vs Plants Download Free for PC: A Fun and Addictive Strategy Game

      -

      If you are looking for a fun and addictive strategy game that will keep you entertained for hours, then you should try Zombie vs Plants. This is a popular tower defense game that pits you against hordes of zombies who are trying to invade your home. You have to use your arsenal of plants to stop them from reaching your door.

      -

      zombie vs plants download free for pc


      Download Zip ⇒⇒⇒ https://urllie.com/2uNDYi



      -

      Zombie vs Plants is a game that can be enjoyed by anyone, regardless of age or skill level. It has simple controls, colorful graphics, humorous sound effects, and a catchy soundtrack. It also has a variety of modes, levels, plants, and zombies to keep you challenged and engaged. You can play it on your mobile device, but you can also download it for free on your PC and enjoy it on a larger screen and with better performance.

      -

      In this article, we will show you how to download and install Zombie vs Plants on your PC, how to play it, and what are the benefits of playing it on your PC. We will also answer some frequently asked questions about Zombie vs Plants download free for PC. So, without further ado, let's get started!

      -

      How to Download and Install Zombie vs Plants on Your PC

      -

      One of the best things about Zombie vs Plants is that it is free to download and play. However, since it is originally designed for mobile devices, you will need an emulator to run it on your PC. An emulator is a software that allows you to run Android apps on your PC. There are many emulators available online, but we recommend using BlueStacks, which is one of the most popular and reliable ones.

      -

      Here are the steps to download and install Zombie vs Plants on your PC using BlueStacks:

      -
        -
      1. Choose a reliable source to download the game. You can either go to the official website of BlueStacks or use one of the links provided by our web search results . Make sure that the source is safe and secure before downloading anything.
      2. -
      3. Download and install BlueStacks on your PC. Follow the instructions on the screen and wait for the installation process to finish.
      4. -
      5. Launch BlueStacks and sign in with your Google account. This will allow you to access the Google Play Store, where you can find Zombie vs Plants.
      6. -
      7. Search for Zombie vs Plants in the search bar at the top right corner of the screen. Click on the game icon and then click on the install button.
      8. -
      9. Wait for the game to download and install on your PC. Once it is done, you can click on the game icon on the home screen of BlueStacks to start playing Zombie vs Plants on your PC.
      10. -
      -

      Congratulations, you have successfully downloaded and installed Zombie vs Plants on your PC. Now, let's see how to play it and have some fun!

      -

      How to Play Zombie vs Plants on Your PC

      -

      Zombie vs Plants is a game that is easy to learn but hard to master. The basic gameplay and controls are the same as on your mobile device, but you can use your mouse and keyboard to play it on your PC. Here are some of the things you need to know about playing Zombie vs Plants on your PC:

      -
        -
      • The basic gameplay and controls of Zombie vs Plants. The goal of the game is to stop the zombies from reaching your house by planting different types of plants on your lawn. You can plant sunflowers to produce sun, which is the currency of the game. You can use the sun to buy more plants and upgrade them. You can also plant peashooters, wall-nuts, cherry bombs, and other plants to attack, defend, or explode the zombies. You can use your mouse to drag and drop the plants on the lawn, and use your keyboard to access the menu and pause the game.
      • -
      • The different modes and levels of Zombie vs Plants. The game has several modes and levels that offer different challenges and rewards. You can play the Adventure mode, which is the main story mode of the game. You can also play the Mini-Games mode, which has various mini-games that test your skills and strategy. You can also play the Puzzle mode, which has puzzles that require you to use specific plants or zombies. You can also play the Survival mode, which has endless waves of zombies that get harder and harder. You can also play the Zen Garden mode, which allows you to grow and collect plants in a relaxing environment.
      • -
      • The different types of plants and zombies in Zombie vs Plants. The game has a variety of plants and zombies that have different abilities and weaknesses. You have to choose the right plants for each level and situation, and learn how to counter the different zombies. For example, you can use spikeweeds to pop the tires of zombie vehicles, or use magnet-shrooms to steal the metal helmets of buckethead zombies. You can also use jalapenos to burn a whole row of zombies, or use squash to squash a single zombie. You can also use cacti to shoot spikes at flying zombies, or use garlic to divert zombies to another lane.
      • -
      • The tips and tricks to win Zombie vs Plants. The game is not only about planting plants and killing zombies, but also about using your brain and strategy. Here are some of the tips and tricks that can help you win Zombie vs Plants:
      • -
          -
        • Plan ahead and prepare for the next wave of zombies. You can see what types of zombies are coming by looking at the flag at the top right corner of the screen. You can also see how many waves are left by looking at the progress bar at the top of the screen.
        • -
        • Use your sun wisely and efficiently. Sun is the most important resource in the game, so you have to manage it well. Don't waste sun on unnecessary plants or upgrades, and don't let sun disappear from the lawn. Try to plant as many sunflowers as possible in the early stages of the game, and collect sun as fast as you can.
        • -
        • Use your plants strategically and creatively. Don't just plant plants randomly or in a straight line, but think about how they can work together or complement each other. For example, you can plant tall-nuts behind wall-nuts to create a double layer of defense, or plant torchwoods in front of peashooters to make their peas more powerful.
        • -
        • Don't forget about your lawn mowers. Lawn mowers are your last line of defense in case a zombie reaches your house. They will automatically run over any zombie in their lane, but they can only be used once per level. So don't rely on them too much, but don't ignore them either.
        • -
        -
      -

      These are some of the basics of playing Zombie vs Plants on your PC. Of course, there is much more to discover and enjoy in this game, so we encourage you to try it out for yourself and have fun!

      -

      The Benefits of Playing Zombie vs Plants on Your PC

      -

      Playing Zombie vs Plants on your PC has many benefits that you may not get from playing it on your mobile device. Here are some of them:

      -

      zombie vs plants free pc game download
      -how to download zombie vs plants for pc free
      -zombie vs plants pc download full version free
      -zombie vs plants free download windows 10
      -zombie vs plants free online game no download pc
      -zombie vs plants 2 download free for pc
      -zombie vs plants 3 download free for pc
      -zombie vs plants garden warfare download free for pc
      -zombie vs plants game of the year edition download free for pc
      -zombie vs plants ultimate battle download free for pc
      -zombie vs plants apk download free for pc
      -zombie vs plants offline download free for pc
      -zombie vs plants mod download free for pc
      -zombie vs plants cheats download free for pc
      -zombie vs plants hack download free for pc
      -zombie vs plants trainer download free for pc
      -zombie vs plants steam download free for pc
      -zombie vs plants origin download free for pc
      -zombie vs plants bluestacks download free for pc
      -zombie vs plants emulator download free for pc
      -zombie vs plants android download free for pc
      -zombie vs plants ios download free for pc
      -zombie vs plants mac download free for pc
      -zombie vs plants linux download free for pc
      -zombie vs plants chromebook download free for pc
      -zombie vs plants unblocked download free for pc
      -zombie vs plants full screen download free for pc
      -zombie vs plants hd download free for pc
      -zombie vs plants 4k download free for pc
      -zombie vs plants 3d download free for pc
      -zombie vs plants vr download free for pc
      -zombie vs plants ar download free for pc
      -zombie vs plants multiplayer download free for pc
      -zombie vs plants co-op download free for pc
      -zombie vs plants pvp download free for pc
      -zombie vs plants survival mode download free for pc
      -zombie vs plants adventure mode download free for pc
      -zombie vs plants zen garden mode download free for pc
      -zombie vs plants almanac mode download free for pc
      -zombie vs plants mini games mode download free for pc
      -best site to download zombie vs plants for pc free
      -best way to download zombie vs plants for pc free
      -easiest way to download zombie vs plants for pc free
      -fastest way to download zombie vs plants for pc free
      -safest way to download zombie vs plants for pc free
      -latest version of zombie vs plants to download for pc free
      -old version of zombie vs plants to download for pc free
      -new version of zombie vs plants to download for pc free
      -updated version of zombie vs plants to download for pc free

      -
        -
      • The advantages of playing Zombie vs Plants on a larger screen and with better performance. Playing Zombie vs Plants on your PC allows you to enjoy the game on a bigger screen with higher resolution and better graphics quality. This makes the game more immersive and enjoyable, as you can see more details and animations of the plants and zombies. Playing Zombie vs Plants on your PC also allows you to enjoy the game with faster speed and smoother performance, as you can avoid the lag, crashes, or glitches that may occur on your mobile device.
      • -
      • The features and enhancements of playing Zombie vs Plants with an emulator. Playing Zombie vs Plants with an emulator like BlueStacks gives you access to some features and enhancements that can improve your gaming experience. For example, you can use the keyboard mapping feature to customize your controls and shortcuts, or use the multi-instance feature to play multiple games or accounts at the same time. You can also use the screen recorder feature to record your gameplay and share it with your friends, or use the macro feature to automate some tasks and save time.
      • -
      -

      These are some of the benefits of playing Zombie vs Plants on your PC. Of course, you can still play the game on your mobile device if you prefer, but we think that playing it on your PC will give you a better and more enjoyable experience.

      -

      Conclusion: Why You Should Download Zombie vs Plants for Free Today

      -

      Zombie vs Plants is a fun and addictive strategy game that will keep you entertained for hours. It is a game that can be enjoyed by anyone, regardless of age or skill level. It has simple controls, colorful graphics, humorous sound effects, and a catchy soundtrack. It also has a variety of modes, levels, plants, and zombies to keep you challenged and engaged.

      -

      Zombie vs Plants is free to download and play, but you can also download it for free on your PC and enjoy it on a larger screen and with better performance. You can also use an emulator like BlueStacks to access some features and enhancements that can improve your gaming experience. Playing Zombie vs Plants on your PC will make the game more immersive and enjoyable, as well as faster and smoother.

      -

      So, what are you waiting for? Download Zombie vs Plants for free today and have fun defending your home from the zombie invasion!

      -

      FAQs: Frequently Asked Questions About Zombie vs Plants Download Free for PC

      -

      Here are some of the most frequently asked questions about Zombie vs Plants download free for PC:

      -
        -
      1. Is Zombie vs Plants free to download and play?
      2. -

        Yes, Zombie vs Plants is free to download and play. However, the game may contain some in-app purchases or ads that can enhance your gameplay or support the developers. You can choose to buy or watch them if you want, but they are not necessary to enjoy the game.

        -
      3. Is Zombie vs Plants safe to download and install?
      4. -

        Yes, Zombie vs Plants is safe to download and install. However, you should always choose a reliable source to download the game, such as the official website of BlueStacks or one of the links provided by our web search results . You should also scan the downloaded file with an antivirus software before installing it on your PC.

        -
      5. Can I play Zombie vs Plants offline?
      6. -

        Yes, you can play Zombie vs Plants offline. You don't need an internet connection to play the game, unless you want to access some online features such as leaderboards or achievements. However, you may need an internet connection to download and install the game on your PC.

        -
      7. How can I get more coins and power-ups in Zombie vs Plants?
      8. -

        You can get more coins and power-ups in Zombie vs Plants by playing the game regularly and completing the levels. You can also get them by watching ads or buying them with real money. Coins and power-ups can help you buy more plants or upgrade them, or give you some advantages in the game.

        -
      9. What are the minimum system requirements to play Zombie vs Plants on PC?
      10. -

        The minimum system requirements to play Zombie vs Plants on PC are as follows:

        -
          -
        • Operating system: Windows 7 or higher
        • -
        • Processor: Intel or AMD processor
        • -
        • RAM: 2 GB or more
        • -
        • Disk space: 5 GB or more
        • -
        • Graphics card: Any graphics card that supports OpenGL 2.0 or higher
        • -
        -

        You may also need an emulator like BlueStacks to run the game on your PC.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fatmacankara/ASCARIS/out_files/alphafold/readme.md b/spaces/fatmacankara/ASCARIS/out_files/alphafold/readme.md deleted file mode 100644 index 7e902adea5f2da0959a28f3eee5733df1710cbce..0000000000000000000000000000000000000000 --- a/spaces/fatmacankara/ASCARIS/out_files/alphafold/readme.md +++ /dev/null @@ -1 +0,0 @@ -#readme \ No newline at end of file diff --git a/spaces/fb700/chat3/toolbox.py b/spaces/fb700/chat3/toolbox.py deleted file mode 100644 index 05fd368df15f97840871704748e87d3ba9e4ee53..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/toolbox.py +++ /dev/null @@ -1,508 +0,0 @@ -import markdown -import importlib -import traceback -import inspect -import re -from latex2mathml.converter import convert as tex2mathml -from functools import wraps, lru_cache -############################### 插件输入输出接驳区 ####################################### -class ChatBotWithCookies(list): - def __init__(self, cookie): - self._cookies = cookie - - def write_list(self, list): - for t in list: - self.append(t) - - def get_list(self): - return [t for t in self] - - def get_cookies(self): - return self._cookies - -def ArgsGeneralWrapper(f): - """ - 装饰器函数,用于重组输入参数,改变输入参数的顺序与结构。 - """ - def decorated(cookies, max_length, llm_model, txt, txt2, top_p, temperature, chatbot, history, system_prompt, *args): - from request_llm.bridge_all import model_info - txt_passon = txt - if txt == "" and txt2 != "": txt_passon = txt2 - # 引入一个有cookie的chatbot - cookies.update({ - 'top_p':top_p, - 'temperature':temperature, - }) - llm_kwargs = { - 'api_key': cookies['api_key'], - 'llm_model': llm_model, - 'top_p':top_p, - 'max_length': max_length, - 'temperature':temperature, - } - plugin_kwargs = { - # 目前还没有 - } - chatbot_with_cookie = ChatBotWithCookies(cookies) - chatbot_with_cookie.write_list(chatbot) - yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args) - return decorated - -def update_ui(chatbot, history, msg='正常', **kwargs): # 刷新界面 - """ - 刷新用户界面 - """ - assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。" - yield chatbot.get_cookies(), chatbot, history, msg - -def CatchException(f): - """ - 装饰器函数,捕捉函数f中的异常并封装到一个生成器中返回,并显示到聊天当中。 - """ - @wraps(f) - def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT): - try: - yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT) - except Exception as e: - from check_proxy import check_proxy - from toolbox import get_conf - proxies, = get_conf('proxies') - tb_str = '```\n' + traceback.format_exc() + '```' - if chatbot is None or len(chatbot) == 0: - chatbot = [["插件调度异常", "异常原因"]] - chatbot[-1] = (chatbot[-1][0], - f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}") - yield from update_ui(chatbot=chatbot, history=history, msg=f'异常 {e}') # 刷新界面 - return decorated - - -def HotReload(f): - """ - HotReload的装饰器函数,用于实现Python函数插件的热更新。 - 函数热更新是指在不停止程序运行的情况下,更新函数代码,从而达到实时更新功能。 - 在装饰器内部,使用wraps(f)来保留函数的元信息,并定义了一个名为decorated的内部函数。 - 内部函数通过使用importlib模块的reload函数和inspect模块的getmodule函数来重新加载并获取函数模块, - 然后通过getattr函数获取函数名,并在新模块中重新加载函数。 - 最后,使用yield from语句返回重新加载过的函数,并在被装饰的函数上执行。 - 最终,装饰器函数返回内部函数。这个内部函数可以将函数的原始定义更新为最新版本,并执行函数的新版本。 - """ - @wraps(f) - def decorated(*args, **kwargs): - fn_name = f.__name__ - f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name) - yield from f_hot_reload(*args, **kwargs) - return decorated - - -####################################### 其他小工具 ##################################### - -def get_reduce_token_percent(text): - """ - * 此函数未来将被弃用 - """ - try: - # text = "maximum context length is 4097 tokens. However, your messages resulted in 4870 tokens" - pattern = r"(\d+)\s+tokens\b" - match = re.findall(pattern, text) - EXCEED_ALLO = 500 # 稍微留一点余地,否则在回复时会因余量太少出问题 - max_limit = float(match[0]) - EXCEED_ALLO - current_tokens = float(match[1]) - ratio = max_limit/current_tokens - assert ratio > 0 and ratio < 1 - return ratio, str(int(current_tokens-max_limit)) - except: - return 0.5, '不详' - - - -def write_results_to_file(history, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os - import time - if file_name is None: - # file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - file_name = 'chatGPT分析报告' + \ - time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md' - os.makedirs('./gpt_log/', exist_ok=True) - with open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - f.write('# chatGPT 分析报告\n') - for i, content in enumerate(history): - try: # 这个bug没找到触发条件,暂时先这样顶一下 - if type(content) != str: - content = str(content) - except: - continue - if i % 2 == 0: - f.write('## ') - f.write(content) - f.write('\n\n') - res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - - -def regular_txt_to_markdown(text): - """ - 将普通文本转换为Markdown格式的文本。 - """ - text = text.replace('\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - text = text.replace('\n\n\n', '\n\n') - return text - - - - -def report_execption(chatbot, history, a, b): - """ - 向chatbot中添加错误信息 - """ - chatbot.append((a, b)) - history.append(a) - history.append(b) - - -def text_divide_paragraph(text): - """ - 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 - """ - if '```' in text: - # careful input - return text - else: - # wtf input - lines = text.split("\n") - for i, line in enumerate(lines): - lines[i] = lines[i].replace(" ", " ") - text = "
      ".join(lines) - return text - - -def markdown_convertion(txt): - """ - 将Markdown格式的文本转换为HTML格式。如果包含数学公式,则先将公式转换为HTML格式。 - """ - pre = '
      ' - suf = '
      ' - markdown_extension_configs = { - 'mdx_math': { - 'enable_dollar_delimiter': True, - 'use_gitlab_delimiters': False, - }, - } - find_equation_pattern = r'\n', '') - return content - - - if ('$' in txt) and ('```' not in txt): # 有$标识的公式符号,且没有代码段```的标识 - # convert everything to html format - split = markdown.markdown(text='---') - convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs) - convert_stage_1 = markdown_bug_hunt(convert_stage_1) - # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s). - # 1. convert to easy-to-copy tex (do not render math) - convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL) - # 2. convert to rendered equation - convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL) - # cat them together - return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf - else: - return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf - - -def close_up_code_segment_during_stream(gpt_reply): - """ - 在gpt输出代码的中途(输出了前面的```,但还没输出完后面的```),补上后面的``` - - Args: - gpt_reply (str): GPT模型返回的回复字符串。 - - Returns: - str: 返回一个新的字符串,将输出代码片段的“后面的```”补上。 - - """ - if '```' not in gpt_reply: - return gpt_reply - if gpt_reply.endswith('```'): - return gpt_reply - - # 排除了以上两个情况,我们 - segments = gpt_reply.split('```') - n_mark = len(segments) - 1 - if n_mark % 2 == 1: - # print('输出代码片段中!') - return gpt_reply+'\n```' - else: - return gpt_reply - - -def format_io(self, y): - """ - 将输入和输出解析为HTML格式。将y中最后一项的输入部分段落化,并将输出部分的Markdown和数学公式转换为HTML格式。 - """ - if y is None or y == []: - return [] - i_ask, gpt_reply = y[-1] - i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波 - gpt_reply = close_up_code_segment_during_stream(gpt_reply) # 当代码输出半截的时候,试着补上后个``` - y[-1] = ( - None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code', 'tables']), - None if gpt_reply is None else markdown_convertion(gpt_reply) - ) - return y - - -def find_free_port(): - """ - 返回当前系统中可用的未使用端口。 - """ - import socket - from contextlib import closing - with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: - s.bind(('', 0)) - s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - return s.getsockname()[1] - - -def extract_archive(file_path, dest_dir): - import zipfile - import tarfile - import os - # Get the file extension of the input file - file_extension = os.path.splitext(file_path)[1] - - # Extract the archive based on its extension - if file_extension == '.zip': - with zipfile.ZipFile(file_path, 'r') as zipobj: - zipobj.extractall(path=dest_dir) - print("Successfully extracted zip archive to {}".format(dest_dir)) - - elif file_extension in ['.tar', '.gz', '.bz2']: - with tarfile.open(file_path, 'r:*') as tarobj: - tarobj.extractall(path=dest_dir) - print("Successfully extracted tar archive to {}".format(dest_dir)) - - # 第三方库,需要预先pip install rarfile - # 此外,Windows上还需要安装winrar软件,配置其Path环境变量,如"C:\Program Files\WinRAR"才可以 - elif file_extension == '.rar': - try: - import rarfile - with rarfile.RarFile(file_path) as rf: - rf.extractall(path=dest_dir) - print("Successfully extracted rar archive to {}".format(dest_dir)) - except: - print("Rar format requires additional dependencies to install") - return '\n\n需要安装pip install rarfile来解压rar文件' - - # 第三方库,需要预先pip install py7zr - elif file_extension == '.7z': - try: - import py7zr - with py7zr.SevenZipFile(file_path, mode='r') as f: - f.extractall(path=dest_dir) - print("Successfully extracted 7z archive to {}".format(dest_dir)) - except: - print("7z format requires additional dependencies to install") - return '\n\n需要安装pip install py7zr来解压7z文件' - else: - return '' - return '' - - -def find_recent_files(directory): - """ - me: find files that is created with in one minutes under a directory with python, write a function - gpt: here it is! - """ - import os - import time - current_time = time.time() - one_minute_ago = current_time - 60 - recent_files = [] - - for filename in os.listdir(directory): - file_path = os.path.join(directory, filename) - if file_path.endswith('.log'): - continue - created_time = os.path.getmtime(file_path) - if created_time >= one_minute_ago: - if os.path.isdir(file_path): - continue - recent_files.append(file_path) - - return recent_files - - -def on_file_uploaded(files, chatbot, txt, txt2, checkboxes): - if len(files) == 0: - return chatbot, txt - import shutil - import os - import time - import glob - from toolbox import extract_archive - try: - shutil.rmtree('./private_upload/') - except: - pass - time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) - os.makedirs(f'private_upload/{time_tag}', exist_ok=True) - err_msg = '' - for file in files: - file_origin_name = os.path.basename(file.orig_name) - shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}') - err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}', - dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract') - moved_files = [fp for fp in glob.glob( - 'private_upload/**/*', recursive=True)] - if "底部输入区" in checkboxes: - txt = "" - txt2 = f'private_upload/{time_tag}' - else: - txt = f'private_upload/{time_tag}' - txt2 = "" - moved_files_str = '\t\n\n'.join(moved_files) - chatbot.append(['我上传了文件,请查收', - f'[Local Message] 收到以下文件: \n\n{moved_files_str}' + - f'\n\n调用路径参数已自动修正到: \n\n{txt}' + - f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg]) - return chatbot, txt, txt2 - - -def on_report_generated(files, chatbot): - from toolbox import find_recent_files - report_files = find_recent_files('gpt_log') - if len(report_files) == 0: - return None, chatbot - # files.extend(report_files) - chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。']) - return report_files, chatbot - -def is_openai_api_key(key): - API_MATCH = re.match(r"sk-[a-zA-Z0-9]{48}$", key) - return bool(API_MATCH) - -def is_api2d_key(key): - if key.startswith('fk') and len(key) == 41: - return True - else: - return False - -def is_any_api_key(key): - if ',' in key: - keys = key.split(',') - for k in keys: - if is_any_api_key(k): return True - return False - else: - return is_openai_api_key(key) or is_api2d_key(key) - - -def select_api_key(keys, llm_model): - import random - avail_key_list = [] - key_list = keys.split(',') - - if llm_model.startswith('gpt-'): - for k in key_list: - if is_openai_api_key(k): avail_key_list.append(k) - - if llm_model.startswith('api2d-'): - for k in key_list: - if is_api2d_key(k): avail_key_list.append(k) - - if len(avail_key_list) == 0: - raise RuntimeError(f"您提供的api-key不满足要求,不包含任何可用于{llm_model}的api-key。") - - api_key = random.choice(avail_key_list) # 随机负载均衡 - return api_key - -@lru_cache(maxsize=128) -def read_single_conf_with_lru_cache(arg): - from colorful import print亮红, print亮绿 - try: - r = getattr(importlib.import_module('config_private'), arg) - except: - r = getattr(importlib.import_module('config'), arg) - # 在读取API_KEY时,检查一下是不是忘了改config - if arg == 'API_KEY': - if is_any_api_key(r): - print亮绿(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功") - else: - print亮红( "[API_KEY] 正确的 API_KEY 是'sk'开头的51位密钥(OpenAI),或者 'fk'开头的41位密钥,请在config文件中修改API密钥之后再运行。") - if arg == 'proxies': - if r is None: - print亮红('[PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。') - else: - print亮绿('[PROXY] 网络代理状态:已配置。配置信息如下:', r) - assert isinstance(r, dict), 'proxies格式错误,请注意proxies选项的格式,不要遗漏括号。' - return r - - -def get_conf(*args): - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - res = [] - for arg in args: - r = read_single_conf_with_lru_cache(arg) - res.append(r) - return res - - -def clear_line_break(txt): - txt = txt.replace('\n', ' ') - txt = txt.replace(' ', ' ') - txt = txt.replace(' ', ' ') - return txt - - -class DummyWith(): - """ - 这段代码定义了一个名为DummyWith的空上下文管理器, - 它的作用是……额……没用,即在代码结构不变得情况下取代其他的上下文管理器。 - 上下文管理器是一种Python对象,用于与with语句一起使用, - 以确保一些资源在代码块执行期间得到正确的初始化和清理。 - 上下文管理器必须实现两个方法,分别为 __enter__()和 __exit__()。 - 在上下文执行开始的情况下,__enter__()方法会在代码块被执行前被调用, - 而在上下文执行结束时,__exit__()方法则会被调用。 - """ - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, traceback): - return diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/generate_facerender_batch.py b/spaces/fb700/chatglm-fitness-RLHF/src/generate_facerender_batch.py deleted file mode 100644 index a821a6ece2fcff83c288a0989097d863cfec3dd1..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/generate_facerender_batch.py +++ /dev/null @@ -1,135 +0,0 @@ -import os -import numpy as np -from PIL import Image -from skimage import io, img_as_float32, transform -import torch -import scipy.io as scio - -def get_facerender_data(coeff_path, pic_path, first_coeff_path, audio_path, - batch_size, input_yaw_list=None, input_pitch_list=None, input_roll_list=None, - expression_scale=1.0, still_mode = False, preprocess='crop', size = 256, facemodel='facevid2vid'): - - semantic_radius = 13 - video_name = os.path.splitext(os.path.split(coeff_path)[-1])[0] - txt_path = os.path.splitext(coeff_path)[0] - - data={} - - img1 = Image.open(pic_path) - source_image = np.array(img1) - source_image = img_as_float32(source_image) - source_image = transform.resize(source_image, (size, size, 3)) - source_image = source_image.transpose((2, 0, 1)) - source_image_ts = torch.FloatTensor(source_image).unsqueeze(0) - source_image_ts = source_image_ts.repeat(batch_size, 1, 1, 1) - data['source_image'] = source_image_ts - - source_semantics_dict = scio.loadmat(first_coeff_path) - generated_dict = scio.loadmat(coeff_path) - - if 'full' not in preprocess.lower() and facemodel != 'pirender': - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:70] #1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:,:70] - else: - source_semantics = source_semantics_dict['coeff_3dmm'][:1,:73] #1 70 - generated_3dmm = generated_dict['coeff_3dmm'][:,:70] - - source_semantics_new = transform_semantic_1(source_semantics, semantic_radius) - source_semantics_ts = torch.FloatTensor(source_semantics_new).unsqueeze(0) - source_semantics_ts = source_semantics_ts.repeat(batch_size, 1, 1) - data['source_semantics'] = source_semantics_ts - - # target - generated_3dmm[:, :64] = generated_3dmm[:, :64] * expression_scale - - if 'full' in preprocess.lower() or facemodel == 'pirender': - generated_3dmm = np.concatenate([generated_3dmm, np.repeat(source_semantics[:,70:], generated_3dmm.shape[0], axis=0)], axis=1) - - if still_mode: - generated_3dmm[:, 64:] = np.repeat(source_semantics[:, 64:], generated_3dmm.shape[0], axis=0) - - with open(txt_path+'.txt', 'w') as f: - for coeff in generated_3dmm: - for i in coeff: - f.write(str(i)[:7] + ' '+'\t') - f.write('\n') - - target_semantics_list = [] - frame_num = generated_3dmm.shape[0] - data['frame_num'] = frame_num - for frame_idx in range(frame_num): - target_semantics = transform_semantic_target(generated_3dmm, frame_idx, semantic_radius) - target_semantics_list.append(target_semantics) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - target_semantics_list.append(target_semantics) - - target_semantics_np = np.array(target_semantics_list) #frame_num 70 semantic_radius*2+1 - target_semantics_np = target_semantics_np.reshape(batch_size, -1, target_semantics_np.shape[-2], target_semantics_np.shape[-1]) - data['target_semantics_list'] = torch.FloatTensor(target_semantics_np) - data['video_name'] = video_name - data['audio_path'] = audio_path - - if input_yaw_list is not None: - yaw_c_seq = gen_camera_pose(input_yaw_list, frame_num, batch_size) - data['yaw_c_seq'] = torch.FloatTensor(yaw_c_seq) - if input_pitch_list is not None: - pitch_c_seq = gen_camera_pose(input_pitch_list, frame_num, batch_size) - data['pitch_c_seq'] = torch.FloatTensor(pitch_c_seq) - if input_roll_list is not None: - roll_c_seq = gen_camera_pose(input_roll_list, frame_num, batch_size) - data['roll_c_seq'] = torch.FloatTensor(roll_c_seq) - - return data - -def transform_semantic_1(semantic, semantic_radius): - semantic_list = [semantic for i in range(0, semantic_radius*2+1)] - coeff_3dmm = np.concatenate(semantic_list, 0) - return coeff_3dmm.transpose(1,0) - -def transform_semantic_target(coeff_3dmm, frame_index, semantic_radius): - num_frames = coeff_3dmm.shape[0] - seq = list(range(frame_index- semantic_radius, frame_index + semantic_radius+1)) - index = [ min(max(item, 0), num_frames-1) for item in seq ] - coeff_3dmm_g = coeff_3dmm[index, :] - return coeff_3dmm_g.transpose(1,0) - -def gen_camera_pose(camera_degree_list, frame_num, batch_size): - - new_degree_list = [] - if len(camera_degree_list) == 1: - for _ in range(frame_num): - new_degree_list.append(camera_degree_list[0]) - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - - degree_sum = 0. - for i, degree in enumerate(camera_degree_list[1:]): - degree_sum += abs(degree-camera_degree_list[i]) - - degree_per_frame = degree_sum/(frame_num-1) - for i, degree in enumerate(camera_degree_list[1:]): - degree_last = camera_degree_list[i] - degree_step = degree_per_frame * abs(degree-degree_last)/(degree-degree_last) - new_degree_list = new_degree_list + list(np.arange(degree_last, degree, degree_step)) - if len(new_degree_list) > frame_num: - new_degree_list = new_degree_list[:frame_num] - elif len(new_degree_list) < frame_num: - for _ in range(frame_num-len(new_degree_list)): - new_degree_list.append(new_degree_list[-1]) - print(len(new_degree_list)) - print(frame_num) - - remainder = frame_num%batch_size - if remainder!=0: - for _ in range(batch_size-remainder): - new_degree_list.append(new_degree_list[-1]) - new_degree_np = np.array(new_degree_list).reshape(batch_size, -1) - return new_degree_np - diff --git a/spaces/fclong/summary/fengshen/data/bert_dataloader/load.py b/spaces/fclong/summary/fengshen/data/bert_dataloader/load.py deleted file mode 100644 index b36ce8ae72b74e9fd006f087ee0810a306badd7e..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/data/bert_dataloader/load.py +++ /dev/null @@ -1,200 +0,0 @@ -import os -import re -from pathlib import Path -import glob -from tqdm import tqdm -from contextlib import ExitStack -import datasets -import multiprocessing -from typing import cast, TextIO -from itertools import chain -import json -from concurrent.futures import ProcessPoolExecutor -from random import shuffle -from pytorch_lightning import LightningDataModule -from typing import Optional - -from torch.utils.data import DataLoader - - -# _SPLIT_DATA_PATH = '/data1/datas/wudao_180g_split/test' -_SPLIT_DATA_PATH = '/data1/datas/wudao_180g_split' -_CACHE_SPLIT_DATA_PATH = '/data1/datas/wudao_180g_FSData' - -# feats = datasets.Features({"text": datasets.Value('string')}) - - -class BertDataGenerate(object): - - def __init__(self, - data_files=_SPLIT_DATA_PATH, - save_path=_CACHE_SPLIT_DATA_PATH, - train_test_validation='950,49,1', - num_proc=1, - cache=True): - self.data_files = Path(data_files) - if save_path: - self.save_path = Path(save_path) - else: - self.save_path = self.file_check( - Path(self.data_files.parent, self.data_files.name+'_FSDataset'), - 'save') - self.num_proc = num_proc - self.cache = cache - self.split_idx = self.split_train_test_validation_index(train_test_validation) - if cache: - self.cache_path = self.file_check( - Path(self.save_path.parent, 'FSDataCache', self.data_files.name), 'cache') - else: - self.cache_path = None - - @staticmethod - def file_check(path, path_type): - print(path) - if not path.exists(): - path.mkdir(parents=True) - print(f"Since no {path_type} directory is specified, the program will automatically create it in {path} directory.") - return str(path) - - @staticmethod - def split_train_test_validation_index(train_test_validation): - split_idx_ = [int(i) for i in train_test_validation.split(',')] - idx_dict = { - 'train_rate': split_idx_[0]/sum(split_idx_), - 'test_rate': split_idx_[1]/sum(split_idx_[1:]) - } - return idx_dict - - def process(self, index, path): - print('saving dataset shard {}'.format(index)) - - ds = (datasets.load_dataset('json', data_files=str(path), - cache_dir=self.cache_path, - features=None)) - # ds = ds.map(self.cut_sent,input_columns='text') - # print(d) - # print('!!!',ds) - ds = ds['train'].train_test_split(train_size=self.split_idx['train_rate']) - ds_ = ds['test'].train_test_split(train_size=self.split_idx['test_rate']) - ds = datasets.DatasetDict({ - 'train': ds['train'], - 'test': ds_['train'], - 'validation': ds_['test'] - }) - # print('!!!!',ds) - ds.save_to_disk(Path(self.save_path, path.name)) - return 'saving dataset shard {} done'.format(index) - - def generate_cache_arrow(self) -> None: - ''' - 生成HF支持的缓存文件,加速后续的加载 - ''' - data_dict_paths = self.data_files.rglob('*') - p = ProcessPoolExecutor(max_workers=self.num_proc) - res = list() - - for index, path in enumerate(data_dict_paths): - res.append(p.submit(self.process, index, path)) - - p.shutdown(wait=True) - for future in res: - print(future.result(), flush=True) - - -def load_dataset(num_proc=4, **kargs): - cache_dict_paths = Path(_CACHE_SPLIT_DATA_PATH).glob('*') - ds = [] - res = [] - p = ProcessPoolExecutor(max_workers=num_proc) - for path in cache_dict_paths: - res.append(p.submit(datasets.load_from_disk, - str(path), **kargs)) - - p.shutdown(wait=True) - for future in res: - ds.append(future.result()) - # print(future.result()) - train = [] - test = [] - validation = [] - for ds_ in ds: - train.append(ds_['train']) - test.append(ds_['test']) - validation.append(ds_['validation']) - # ds = datasets.concatenate_datasets(ds) - # print(ds) - return datasets.DatasetDict({ - 'train': datasets.concatenate_datasets(train), - 'test': datasets.concatenate_datasets(test), - 'validation': datasets.concatenate_datasets(validation) - }) - - -class BertDataModule(LightningDataModule): - @ staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('Universal DataModule') - parser.add_argument('--num_workers', default=8, type=int) - parser.add_argument('--train_batchsize', default=32, type=int) - parser.add_argument('--val_batchsize', default=32, type=int) - parser.add_argument('--test_batchsize', default=32, type=int) - parser.add_argument('--datasets_name', type=str) - # parser.add_argument('--datasets_name', type=str) - parser.add_argument('--train_datasets_field', type=str, default='train') - parser.add_argument('--val_datasets_field', type=str, default='validation') - parser.add_argument('--test_datasets_field', type=str, default='test') - return parent_args - - def __init__( - self, - tokenizer, - collate_fn, - args, - **kwargs, - ): - super().__init__() - self.datasets = load_dataset(num_proc=args.num_workers) - self.tokenizer = tokenizer - self.collate_fn = collate_fn - self.save_hyperparameters(args) - - def setup(self, stage: Optional[str] = None) -> None: - self.train = DataLoader( - self.datasets[self.hparams.train_datasets_field], - batch_size=self.hparams.train_batchsize, - shuffle=True, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) - self.val = DataLoader( - self.datasets[self.hparams.val_datasets_field], - batch_size=self.hparams.val_batchsize, - shuffle=False, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) - self.test = DataLoader( - self.datasets[self.hparams.test_datasets_field], - batch_size=self.hparams.test_batchsize, - shuffle=False, - num_workers=self.hparams.num_workers, - collate_fn=self.collate_fn, - ) - return - - def train_dataloader(self): - return self.train - - def val_dataloader(self): - return self.val - - def test_dataloader(self): - return self.test - - -if __name__ == '__main__': - # pre = PreProcessing(_SPLIT_DATA_PATH) - # pre.processing() - - dataset = BertDataGenerate(_SPLIT_DATA_PATH, num_proc=16) - dataset.generate_cache_arrow() diff --git a/spaces/fclong/summary/fengshen/models/auto/__init__.py b/spaces/fclong/summary/fengshen/models/auto/__init__.py deleted file mode 100644 index ef185f32cc2d9f9b30db1a6a681ce2df34936351..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/auto/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from transformers.file_utils import _LazyModule, is_torch_available - - -_import_structure = { - "auto_factory": ["get_values"], - "configuration_auto": ["ALL_PRETRAINED_CONFIG_ARCHIVE_MAP", "CONFIG_MAPPING", "MODEL_NAMES_MAPPING", "AutoConfig"], - "tokenization_auto": ["TOKENIZER_MAPPING", "AutoTokenizer"], -} - -if is_torch_available(): - _import_structure["modeling_auto"] = [ - "AutoModel", - "AutoModelForMaskedLM", - "AutoModelForMultipleChoice", - "AutoModelForPreTraining", - "AutoModelForQuestionAnswering", - "AutoModelForSequenceClassification", - "AutoModelForTokenClassification", - ] - -if TYPE_CHECKING: - from .auto_factory import get_values - from .configuration_auto import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, CONFIG_MAPPING, MODEL_NAMES_MAPPING, AutoConfig - from .tokenization_auto import TOKENIZER_MAPPING, AutoTokenizer - if is_torch_available(): - from .modeling_auto import ( - AutoModel, - AutoModelForMaskedLM, - AutoModelForMultipleChoice, - AutoModelForPreTraining, - AutoModelForQuestionAnswering, - AutoModelForSequenceClassification, - AutoModelForTokenClassification, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Europe 3 and Enjoy the Most Realistic Truck Physics on Your iPhone.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Europe 3 and Enjoy the Most Realistic Truck Physics on Your iPhone.md deleted file mode 100644 index 9b661acd2338d16151856978369e59e5d0c4c0ae..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Truck Simulator Europe 3 and Enjoy the Most Realistic Truck Physics on Your iPhone.md +++ /dev/null @@ -1,95 +0,0 @@ -
      -

      Truck Simulator Europe 3: How to Download and Play on iOS

      |

      Do you love driving trucks across Europe? Do you want to experience realistic truck physics, customizations, and traffic? Do you want to explore different cities and deliver cargo in an open world? If you answered yes to any of these questions, then you should try Truck Simulator Europe 3 on your iOS device.

      -

      truck simulator europe 3 download ios


      Download Filehttps://gohhs.com/2uPusq



      -

      Truck Simulator Europe 3 is a truck simulation game that lets you become a real trucker. You can choose from 7 different trucks with various chassis configurations, 25 trailers and many cargo options. You can drive across country roads and highways in different weather conditions and day-night cycles. You can also manage your own business by buying new trucks and trailers, selecting your jobs, and earning money.

      -

      In this article, we will show you how to download and play Truck Simulator Europe 3 on your iOS device. We will also give you some tips and tricks to make your trucking experience more enjoyable. So buckle up and get ready for some trucking fun!

      -

      How to download Truck Simulator Europe 3 on iOS devices?

      -

      Downloading Truck Simulator Europe 3 on your iOS device is very easy. Just follow these simple steps:

      -
        -
      1. Open the App Store on your iOS device.
      2. -
      3. Search for "Truckers of Europe 3" or click [here](^4^) to go directly to the app page.
      4. -
      5. Tap on the "Get" button to install the app for free.
      6. -
      7. Wait for the app to download and install on your device.
      8. -
      9. Launch the app from your home screen or app library.
      10. -
      -

      Congratulations! You have successfully downloaded Truck Simulator Europe 3 on your iOS device. Now you can start your trucking career!

      -

      How to play Truck Simulator Europe 3 on iOS devices?

      -

      Playing Truck Simulator Europe 3 on your iOS device is very intuitive. Just follow these simple steps:

      -
        -
      1. When you launch the app, you will see a menu with different options. You can choose to start a new game, continue an existing game, customize your truck, view your achievements, or change your settings.
      2. -
      3. If you start a new game, you will have to choose a name for your profile and select a truck from the available options. You will also have to choose a city to start from. You can change your truck and city later in the game.
      4. -
      5. If you continue an existing game, you will resume from where you left off. You can access your garage, map, job market, bank, and settings from the main screen.
      6. -
      7. To drive your truck, you will have to use the virtual buttons and pedals on the screen. You can also tilt your device to steer your truck. You can switch between different camera views by tapping on the camera icon. You can also use the map and GPS to navigate your route.
      8. -
      9. To pick up a cargo, you will have to go to the designated location and park your truck near the trailer. You can then attach the trailer by tapping on the trailer icon. You can also detach the trailer by tapping on the same icon.
      10. -
      11. To deliver a cargo, you will have to go to the destination and park your truck in the marked area. You can then detach the trailer and complete the job. You will earn money and experience points for each successful delivery.
      12. -
      13. You can use the money you earn to buy new trucks and trailers, upgrade your existing ones, or pay off your loans. You can also use the experience points to unlock new skills and perks that will improve your performance and reputation.
      14. -
      -

      That's it! You are now ready to enjoy Truck Simulator Europe 3 on your iOS device!

      -

      truck simulator europe 3 app store
      -truck simulator europe 3 realistic missions
      -truck simulator europe 3 customizations
      -truck simulator europe 3 multiplayer mode
      -truck simulator europe 3 online convoy
      -truck simulator europe 3 realistic weather
      -truck simulator europe 3 visual damage
      -truck simulator europe 3 detailed interiors
      -truck simulator europe 3 engine sounds
      -truck simulator europe 3 achievements and leaderboards
      -truck simulator europe 3 free download ios
      -truck simulator europe 3 best european trucks
      -truck simulator europe 3 run your own business
      -truck simulator europe 3 freight deliveries
      -truck simulator europe 3 amazing graphics
      -truck simulator europe 3 easy controls
      -truck simulator europe 3 driving experience
      -truck simulator europe 3 game center support
      -truck simulator europe 3 in-app purchases
      -truck simulator europe 3 ovilex soft srl
      -truckers of europe 3 download ios
      -truckers of europe 3 become a real trucker
      -truckers of europe 3 realistic physics
      -truckers of europe 3 travel across many countries
      -truckers of europe 3 career mode
      -truckers of europe 3 explore the trucking world
      -truckers of europe 3 show off your customized truck
      -truckers of europe 3 king of the road
      -truckers of europe 3 connecting to apple music
      -euro truck evolution sim download ios
      -euro truck evolution sim open-world truck simulator
      -euro truck evolution sim discover the europe driving
      -euro truck evolution sim featuring european trucks with lots of customizations
      -euro truck evolution sim exciting driving experience that will make you feel like driving real trucks
      -euro truck evolution sim visit incredible places like berlin, prague, madrid, rome, paris and more
      -euro truck evolution sim drive across country roads and highways
      -euro truck evolution sim realistic weather conditions and day/night cycle
      -euro truck evolution sim improved ai traffic system
      -euro truck evolution sim request new trucks or features on our social pages
      -euro truck evolution sim alexandru marusac

      -

      Tips and tricks for Truck Simulator Europe 3 on iOS devices

      -

      Here are some tips and tricks that will help you get the most out of Truck Simulator Europe 3 on your iOS device:

      -
        -
      • Follow the traffic rules and avoid accidents. You will lose money and reputation if you break the law or damage your truck or cargo.
      • -
      • Plan your route carefully and choose the best jobs for your truck and skills. You will earn more money and experience points if you deliver high-value or urgent cargoes.
      • -
      • Manage your fuel, fatigue, and damage levels. You will have to refuel your truck at gas stations, rest at hotels or rest areas, and repair your truck at service stations. You can also use the emergency call button if you get stuck or need assistance.
      • -
      • Customize your truck to suit your style and preferences. You can change the color, accessories, wheels, lights, horns, and more of your truck. You can also add stickers, flags, or license plates to personalize your truck.
      • -
      • Explore different cities and landmarks in Europe. You can visit famous places like Paris, Berlin, Rome, London, Amsterdam, and more. You can also discover hidden roads and scenic routes that will make your journey more enjoyable.
      • -
      -

      Conclusion: Summarize the main points and invite readers to try Truck Simulator Europe 3 on iOS devices

      -

      Truck Simulator Europe 3 is a truck simulation game that lets you become a real trucker on your iOS device. You can drive across Europe in 7 different trucks with 25 trailers and many cargo options. You can also manage your own business by buying new trucks and trailers, selecting your jobs, and earning money. You can also customize your truck, follow the traffic rules, plan your route, manage your fuel, fatigue, and damage levels, and explore different cities and landmarks in Europe.

      -

      If you love driving trucks across Europe, then you should definitely try Truck Simulator Europe 3 on your iOS device. It is a fun and realistic game that will keep you entertained for hours. You can download it for free from the App Store [here]. So what are you waiting for? Start your trucking career today!

      -

      FAQs: Answer some common questions about Truck Simulator Europe 3 on iOS devices

      -

      Here are some common questions and answers about Truck Simulator Europe 3 on iOS devices:

      -

      Q: How much space does Truck Simulator Europe 3 take on my iOS device?

      -

      A: Truck Simulator Europe 3 takes about 1 GB of space on your iOS device. Make sure you have enough storage space before downloading it.

      -

      Q: How do I save my progress in Truck Simulator Europe 3?

      -

      A: Truck Simulator Europe 3 automatically saves your progress every time you complete a job or exit the game. You can also manually save your progress by tapping on the pause button and then tapping on the save button.

      -

      Q: How do I change the language of Truck Simulator Europe 3?

      -

      A: Truck Simulator Europe 3 supports multiple languages including English, French, German, Italian, Spanish, Portuguese, Turkish, Russian, Polish, Czech, Hungarian, Romanian, Bulgarian, and Slovak. You can change the language of the game by tapping on the settings button and then tapping on the language button. You can then select your preferred language from the list.

      -

      Q: How do I contact the developers of Truck Simulator Europe 3?

      -

      A: You can contact the developers of Truck Simulator Europe 3 by sending an email to support@truckersofeurope.com. You can also follow them on Facebook, Twitter, Instagram, and YouTube for the latest news and updates.

      -

      Q: How do I rate and review Truck Simulator Europe 3?

      -

      A: You can rate and review Truck Simulator Europe 3 by going to the App Store and tapping on the stars or the write a review button. You can also share your feedback and suggestions with the developers by sending an email to support@truckersofeurope.com.

      -

      Q: How do I get more money and experience points in Truck Simulator Europe 3?

      -

      A: You can get more money and experience points in Truck Simulator Europe 3 by completing more jobs, delivering high-value or urgent cargoes, driving safely and efficiently, and unlocking new skills and perks. You can also watch ads or make in-app purchases to get more money and experience points.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fffffu/bing/src/components/welcome-screen.tsx b/spaces/fffffu/bing/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' - }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick, 'setInput'>) { - return ( -
      - {exampleMessages.map(example => ( - - ))} -
      - ) -} diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/index.js deleted file mode 100644 index 41edb3b81bc186adfeddb9c79b709242fb385002..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/index.js +++ /dev/null @@ -1,13 +0,0 @@ -'use strict'; - -const WebSocket = require('./lib/websocket'); - -WebSocket.createWebSocketStream = require('./lib/stream'); -WebSocket.Server = require('./lib/websocket-server'); -WebSocket.Receiver = require('./lib/receiver'); -WebSocket.Sender = require('./lib/sender'); - -WebSocket.WebSocket = WebSocket; -WebSocket.WebSocketServer = WebSocket.Server; - -module.exports = WebSocket; diff --git a/spaces/fffiloni/lama-video-watermark-remover/bin/blur_predicts.py b/spaces/fffiloni/lama-video-watermark-remover/bin/blur_predicts.py deleted file mode 100644 index a14fcc28d5a906ad3a21ab4ba482f38b4fc411cb..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/bin/blur_predicts.py +++ /dev/null @@ -1,57 +0,0 @@ -#!/usr/bin/env python3 - -import os - -import cv2 -import numpy as np -import tqdm - -from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset -from saicinpainting.evaluation.utils import load_yaml - - -def main(args): - config = load_yaml(args.config) - - if not args.predictdir.endswith('/'): - args.predictdir += '/' - - dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs) - - os.makedirs(os.path.dirname(args.outpath), exist_ok=True) - - for img_i in tqdm.trange(len(dataset)): - pred_fname = dataset.pred_filenames[img_i] - cur_out_fname = os.path.join(args.outpath, pred_fname[len(args.predictdir):]) - os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True) - - sample = dataset[img_i] - img = sample['image'] - mask = sample['mask'] - inpainted = sample['inpainted'] - - inpainted_blurred = cv2.GaussianBlur(np.transpose(inpainted, (1, 2, 0)), - ksize=(args.k, args.k), - sigmaX=args.s, sigmaY=args.s, - borderType=cv2.BORDER_REFLECT) - - cur_res = (1 - mask) * np.transpose(img, (1, 2, 0)) + mask * inpainted_blurred - cur_res = np.clip(cur_res * 255, 0, 255).astype('uint8') - cur_res = cv2.cvtColor(cur_res, cv2.COLOR_RGB2BGR) - cv2.imwrite(cur_out_fname, cur_res) - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('config', type=str, help='Path to evaluation config') - aparser.add_argument('datadir', type=str, - help='Path to folder with images and masks (output of gen_mask_dataset.py)') - aparser.add_argument('predictdir', type=str, - help='Path to folder with predicts (e.g. predict_hifill_baseline.py)') - aparser.add_argument('outpath', type=str, help='Where to put results') - aparser.add_argument('-s', type=float, default=0.1, help='Gaussian blur sigma') - aparser.add_argument('-k', type=int, default=5, help='Kernel size in gaussian blur') - - main(aparser.parse_args()) diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp b/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp deleted file mode 100644 index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/crazy_functions/test_project/cpp/longcode/jpgd.cpp +++ /dev/null @@ -1,3276 +0,0 @@ -// jpgd.cpp - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -// Last updated Apr. 16, 2011 -// Alex Evans: Linear memory allocator (taken from jpge.h). -// -// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2. -// -// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling. -// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain" -// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html - -#include "jpgd.h" -#include - -#include -// BEGIN EPIC MOD -#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0 -// END EPIC MOD - -#ifdef _MSC_VER -#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable -#endif - -// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling). -// This is slower, but results in higher quality on images with highly saturated colors. -#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1 - -#define JPGD_TRUE (1) -#define JPGD_FALSE (0) - -#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b)) -#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b)) - -namespace jpgd { - - static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); } - static inline void jpgd_free(void *p) { FMemory::Free(p); } - -// BEGIN EPIC MOD -//@UE3 - use UE3 BGRA encoding instead of assuming RGBA - // stolen from IImageWrapper.h - enum ERGBFormatJPG - { - Invalid = -1, - RGBA = 0, - BGRA = 1, - Gray = 2, - }; - static ERGBFormatJPG jpg_format; -// END EPIC MOD - - // DCT coefficients are stored in this sequence. - static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; - - enum JPEG_MARKER - { - M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8, - M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC, - M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7, - M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF, - M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0 - }; - - enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 }; - -#define CONST_BITS 13 -#define PASS1_BITS 2 -#define SCALEDONE ((int32)1) - -#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */ -#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */ -#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */ -#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */ -#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */ -#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */ -#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */ -#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */ -#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */ -#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */ -#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */ -#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */ - -#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n)) -#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n)) - -#define MULTIPLY(var, cnst) ((var) * (cnst)) - -#define CLAMP(i) ((static_cast(i) > 255) ? (((~i) >> 31) & 0xFF) : (i)) - - // Compiler creates a fast path 1D IDCT for X non-zero columns - template - struct Row - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - // ACCESS_COL() will be optimized at compile time to either an array access, or 0. -#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0) - - const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS; - const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS); - pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS); - pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS); - pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS); - pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS); - pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS); - pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS); - pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS); - } - }; - - template <> - struct Row<0> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { -#ifdef _MSC_VER - pTemp; pSrc; -#endif - } - }; - - template <> - struct Row<1> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - const int dcval = (pSrc[0] << PASS1_BITS); - - pTemp[0] = dcval; - pTemp[1] = dcval; - pTemp[2] = dcval; - pTemp[3] = dcval; - pTemp[4] = dcval; - pTemp[5] = dcval; - pTemp[6] = dcval; - pTemp[7] = dcval; - } - }; - - // Compiler creates a fast path 1D IDCT for X non-zero rows - template - struct Col - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - // ACCESS_ROW() will be optimized at compile time to either an array access, or 0. -#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0) - - const int z2 = ACCESS_ROW(2); - const int z3 = ACCESS_ROW(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS; - const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*0] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*7] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*1] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*6] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*2] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*5] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*3] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*4] = (uint8)CLAMP(i); - } - }; - - template <> - struct Col<1> - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3); - const uint8 dcval_clamped = (uint8)CLAMP(dcval); - pDst_ptr[0*8] = dcval_clamped; - pDst_ptr[1*8] = dcval_clamped; - pDst_ptr[2*8] = dcval_clamped; - pDst_ptr[3*8] = dcval_clamped; - pDst_ptr[4*8] = dcval_clamped; - pDst_ptr[5*8] = dcval_clamped; - pDst_ptr[6*8] = dcval_clamped; - pDst_ptr[7*8] = dcval_clamped; - } - }; - - static const uint8 s_idct_row_table[] = - { - 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0, - 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0, - 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0, - 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0, - 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2, - 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2, - 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4, - 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8, - }; - - static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 }; - - void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag) - { - JPGD_ASSERT(block_max_zag >= 1); - JPGD_ASSERT(block_max_zag <= 64); - - if (block_max_zag == 1) - { - int k = ((pSrc_ptr[0] + 4) >> 3) + 128; - k = CLAMP(k); - k = k | (k<<8); - k = k | (k<<16); - - for (int i = 8; i > 0; i--) - { - *(int*)&pDst_ptr[0] = k; - *(int*)&pDst_ptr[4] = k; - pDst_ptr += 8; - } - return; - } - - int temp[64]; - - const jpgd_block_t* pSrc = pSrc_ptr; - int* pTemp = temp; - - const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8]; - int i; - for (i = 8; i > 0; i--, pRow_tab++) - { - switch (*pRow_tab) - { - case 0: Row<0>::idct(pTemp, pSrc); break; - case 1: Row<1>::idct(pTemp, pSrc); break; - case 2: Row<2>::idct(pTemp, pSrc); break; - case 3: Row<3>::idct(pTemp, pSrc); break; - case 4: Row<4>::idct(pTemp, pSrc); break; - case 5: Row<5>::idct(pTemp, pSrc); break; - case 6: Row<6>::idct(pTemp, pSrc); break; - case 7: Row<7>::idct(pTemp, pSrc); break; - case 8: Row<8>::idct(pTemp, pSrc); break; - } - - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - - const int nonzero_rows = s_idct_col_table[block_max_zag - 1]; - for (i = 8; i > 0; i--) - { - switch (nonzero_rows) - { - case 1: Col<1>::idct(pDst_ptr, pTemp); break; - case 2: Col<2>::idct(pDst_ptr, pTemp); break; - case 3: Col<3>::idct(pDst_ptr, pTemp); break; - case 4: Col<4>::idct(pDst_ptr, pTemp); break; - case 5: Col<5>::idct(pDst_ptr, pTemp); break; - case 6: Col<6>::idct(pDst_ptr, pTemp); break; - case 7: Col<7>::idct(pDst_ptr, pTemp); break; - case 8: Col<8>::idct(pDst_ptr, pTemp); break; - } - - pTemp++; - pDst_ptr++; - } - } - - void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr) - { - int temp[64]; - int* pTemp = temp; - const jpgd_block_t* pSrc = pSrc_ptr; - - for (int i = 4; i > 0; i--) - { - Row<4>::idct(pTemp, pSrc); - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - for (int i = 8; i > 0; i--) - { - Col<4>::idct(pDst_ptr, pTemp); - pTemp++; - pDst_ptr++; - } - } - - // Retrieve one character from the input stream. - inline uint jpeg_decoder::get_char() - { - // Any bytes remaining in buffer? - if (!m_in_buf_left) - { - // Try to get more bytes. - prep_in_buffer(); - // Still nothing to get? - if (!m_in_buf_left) - { - // Pad the end of the stream with 0xFF 0xD9 (EOI marker) - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Same as previous method, except can indicate if the character is a pad character or not. - inline uint jpeg_decoder::get_char(bool *pPadding_flag) - { - if (!m_in_buf_left) - { - prep_in_buffer(); - if (!m_in_buf_left) - { - *pPadding_flag = true; - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - *pPadding_flag = false; - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Inserts a previously retrieved character back into the input buffer. - inline void jpeg_decoder::stuff_char(uint8 q) - { - *(--m_pIn_buf_ofs) = q; - m_in_buf_left++; - } - - // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered. - inline uint8 jpeg_decoder::get_octet() - { - bool padding_flag; - int c = get_char(&padding_flag); - - if (c == 0xFF) - { - if (padding_flag) - return 0xFF; - - c = get_char(&padding_flag); - if (padding_flag) - { - stuff_char(0xFF); - return 0xFF; - } - - if (c == 0x00) - return 0xFF; - else - { - stuff_char(static_cast(c)); - stuff_char(0xFF); - return 0xFF; - } - } - - return static_cast(c); - } - - // Retrieves a variable number of bits from the input stream. Does not recognize markers. - inline uint jpeg_decoder::get_bits(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - uint c1 = get_char(); - uint c2 = get_char(); - m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2; - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered. - inline uint jpeg_decoder::get_bits_no_markers(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF)) - { - uint c1 = get_octet(); - uint c2 = get_octet(); - m_bit_buf |= (c1 << 8) | c2; - } - else - { - m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1]; - m_in_buf_left -= 2; - m_pIn_buf_ofs += 2; - } - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0) - { - // Decode more bits, use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - } - else - get_bits_no_markers(pH->code_size[symbol]); - - return symbol; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0) - { - // Use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - - extra_bits = get_bits_no_markers(symbol & 0xF); - } - else - { - JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0)); - - if (symbol & 0x8000) - { - get_bits_no_markers((symbol >> 8) & 31); - extra_bits = symbol >> 16; - } - else - { - int code_size = (symbol >> 8) & 31; - int num_extra_bits = symbol & 0xF; - int bits = code_size + num_extra_bits; - if (bits <= (m_bits_left + 16)) - extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1); - else - { - get_bits_no_markers(code_size); - extra_bits = get_bits_no_markers(num_extra_bits); - } - } - - symbol &= 0xFF; - } - - return symbol; - } - - // Tables and macro used to fully decode the DPCM differences. - static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 }; - static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 }; - static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) }; -#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x)) - - // Clamps a value between 0-255. - inline uint8 jpeg_decoder::clamp(int i) - { - if (static_cast(i) > 255) - i = (((~i) >> 31) & 0xFF); - - return static_cast(i); - } - - namespace DCT_Upsample - { - struct Matrix44 - { - typedef int Element_Type; - enum { NUM_ROWS = 4, NUM_COLS = 4 }; - - Element_Type v[NUM_ROWS][NUM_COLS]; - - inline int rows() const { return NUM_ROWS; } - inline int cols() const { return NUM_COLS; } - - inline const Element_Type & at(int r, int c) const { return v[r][c]; } - inline Element_Type & at(int r, int c) { return v[r][c]; } - - inline Matrix44() { } - - inline Matrix44& operator += (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) += a.at(r, 0); - at(r, 1) += a.at(r, 1); - at(r, 2) += a.at(r, 2); - at(r, 3) += a.at(r, 3); - } - return *this; - } - - inline Matrix44& operator -= (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) -= a.at(r, 0); - at(r, 1) -= a.at(r, 1); - at(r, 2) -= a.at(r, 2); - at(r, 3) -= a.at(r, 3); - } - return *this; - } - - friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) + b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) + b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) + b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) + b.at(r, 3); - } - return ret; - } - - friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) - b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) - b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) - b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) - b.at(r, 3); - } - return ret; - } - - static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) + b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) + b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) + b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 3) + b.at(r, 3)); - } - } - - static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) - b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) - b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) - b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 3) - b.at(r, 3)); - } - } - }; - - const int FRACT_BITS = 10; - const int SCALE = 1 << FRACT_BITS; - - typedef int Temp_Type; -#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS) -#define F(i) ((int)((i) * SCALE + .5f)) - - // Any decent C++ compiler will optimize this at compile time to a 0, or an array access. -#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8]) - - // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix - template - struct P_Q - { - static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X000 = AT(0, 0); - const Temp_Type X001 = AT(0, 1); - const Temp_Type X002 = AT(0, 2); - const Temp_Type X003 = AT(0, 3); - const Temp_Type X004 = AT(0, 4); - const Temp_Type X005 = AT(0, 5); - const Temp_Type X006 = AT(0, 6); - const Temp_Type X007 = AT(0, 7); - const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0)); - const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1)); - const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2)); - const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3)); - const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4)); - const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5)); - const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6)); - const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7)); - const Temp_Type X020 = AT(4, 0); - const Temp_Type X021 = AT(4, 1); - const Temp_Type X022 = AT(4, 2); - const Temp_Type X023 = AT(4, 3); - const Temp_Type X024 = AT(4, 4); - const Temp_Type X025 = AT(4, 5); - const Temp_Type X026 = AT(4, 6); - const Temp_Type X027 = AT(4, 7); - const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0)); - const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1)); - const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2)); - const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3)); - const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4)); - const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5)); - const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6)); - const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7)); - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - P.at(0, 0) = X000; - P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f)); - P.at(0, 2) = X004; - P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f)); - P.at(1, 0) = X010; - P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f)); - P.at(1, 2) = X014; - P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f)); - P.at(2, 0) = X020; - P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f)); - P.at(2, 2) = X024; - P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f)); - P.at(3, 0) = X030; - P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f)); - P.at(3, 2) = X034; - P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f)); - // 40 muls 24 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f)); - Q.at(0, 1) = X002; - Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f)); - Q.at(0, 3) = X006; - Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f)); - Q.at(1, 1) = X012; - Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f)); - Q.at(1, 3) = X016; - Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f)); - Q.at(2, 1) = X022; - Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f)); - Q.at(2, 3) = X026; - Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f)); - Q.at(3, 1) = X032; - Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f)); - Q.at(3, 3) = X036; - // 40 muls 24 adds - } - }; - - template - struct R_S - { - static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0)); - const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1)); - const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2)); - const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3)); - const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4)); - const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5)); - const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6)); - const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7)); - const Temp_Type X110 = AT(2, 0); - const Temp_Type X111 = AT(2, 1); - const Temp_Type X112 = AT(2, 2); - const Temp_Type X113 = AT(2, 3); - const Temp_Type X114 = AT(2, 4); - const Temp_Type X115 = AT(2, 5); - const Temp_Type X116 = AT(2, 6); - const Temp_Type X117 = AT(2, 7); - const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0)); - const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1)); - const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2)); - const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3)); - const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4)); - const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5)); - const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6)); - const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7)); - const Temp_Type X130 = AT(6, 0); - const Temp_Type X131 = AT(6, 1); - const Temp_Type X132 = AT(6, 2); - const Temp_Type X133 = AT(6, 3); - const Temp_Type X134 = AT(6, 4); - const Temp_Type X135 = AT(6, 5); - const Temp_Type X136 = AT(6, 6); - const Temp_Type X137 = AT(6, 7); - // 80 muls 48 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - R.at(0, 0) = X100; - R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f)); - R.at(0, 2) = X104; - R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f)); - R.at(1, 0) = X110; - R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f)); - R.at(1, 2) = X114; - R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f)); - R.at(2, 0) = X120; - R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f)); - R.at(2, 2) = X124; - R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f)); - R.at(3, 0) = X130; - R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f)); - R.at(3, 2) = X134; - R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f)); - // 40 muls 24 adds - // 4x4 = 4x8 times 8x4, matrix 1 is constant - S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f)); - S.at(0, 1) = X102; - S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f)); - S.at(0, 3) = X106; - S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f)); - S.at(1, 1) = X112; - S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f)); - S.at(1, 3) = X116; - S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f)); - S.at(2, 1) = X122; - S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f)); - S.at(2, 3) = X126; - S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f)); - S.at(3, 1) = X132; - S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f)); - S.at(3, 3) = X136; - // 40 muls 24 adds - } - }; - } // end namespace DCT_Upsample - - // Unconditionally frees all allocated m_blocks. - void jpeg_decoder::free_all_blocks() - { - m_pStream = NULL; - for (mem_block *b = m_pMem_blocks; b; ) - { - mem_block *n = b->m_pNext; - jpgd_free(b); - b = n; - } - m_pMem_blocks = NULL; - } - - // This method handles all errors. - // It could easily be changed to use C++ exceptions. - void jpeg_decoder::stop_decoding(jpgd_status status) - { - m_error_code = status; - free_all_blocks(); - longjmp(m_jmp_state, status); - - // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit - // that this function doesn't return, otherwise we get this error: - // - // error : function declared 'noreturn' should not return - exit(1); - } - - void *jpeg_decoder::alloc(size_t nSize, bool zero) - { - nSize = (JPGD_MAX(nSize, 1) + 3) & ~3; - char *rv = NULL; - for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext) - { - if ((b->m_used_count + nSize) <= b->m_size) - { - rv = b->m_data + b->m_used_count; - b->m_used_count += nSize; - break; - } - } - if (!rv) - { - int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047); - mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity); - if (!b) stop_decoding(JPGD_NOTENOUGHMEM); - b->m_pNext = m_pMem_blocks; m_pMem_blocks = b; - b->m_used_count = nSize; - b->m_size = capacity; - rv = b->m_data; - } - if (zero) memset(rv, 0, nSize); - return rv; - } - - void jpeg_decoder::word_clear(void *p, uint16 c, uint n) - { - uint8 *pD = (uint8*)p; - const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF; - while (n) - { - pD[0] = l; pD[1] = h; pD += 2; - n--; - } - } - - // Refill the input buffer. - // This method will sit in a loop until (A) the buffer is full or (B) - // the stream's read() method reports and end of file condition. - void jpeg_decoder::prep_in_buffer() - { - m_in_buf_left = 0; - m_pIn_buf_ofs = m_in_buf; - - if (m_eof_flag) - return; - - do - { - int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag); - if (bytes_read == -1) - stop_decoding(JPGD_STREAM_READ); - - m_in_buf_left += bytes_read; - } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag)); - - m_total_bytes_read += m_in_buf_left; - - // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid). - // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.) - word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64); - } - - // Read a Huffman code table. - void jpeg_decoder::read_dht_marker() - { - int i, index, count; - uint8 huff_num[17]; - uint8 huff_val[256]; - - uint num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= 2; - - while (num_left) - { - index = get_bits(8); - - huff_num[0] = 0; - - count = 0; - - for (i = 1; i <= 16; i++) - { - huff_num[i] = static_cast(get_bits(8)); - count += huff_num[i]; - } - - if (count > 255) - stop_decoding(JPGD_BAD_DHT_COUNTS); - - for (i = 0; i < count; i++) - huff_val[i] = static_cast(get_bits(8)); - - i = 1 + 16 + count; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= i; - - if ((index & 0x10) > 0x10) - stop_decoding(JPGD_BAD_DHT_INDEX); - - index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1); - - if (index >= JPGD_MAX_HUFF_TABLES) - stop_decoding(JPGD_BAD_DHT_INDEX); - - if (!m_huff_num[index]) - m_huff_num[index] = (uint8 *)alloc(17); - - if (!m_huff_val[index]) - m_huff_val[index] = (uint8 *)alloc(256); - - m_huff_ac[index] = (index & 0x10) != 0; - memcpy(m_huff_num[index], huff_num, 17); - memcpy(m_huff_val[index], huff_val, 256); - } - } - - // Read a quantization table. - void jpeg_decoder::read_dqt_marker() - { - int n, i, prec; - uint num_left; - uint temp; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DQT_MARKER); - - num_left -= 2; - - while (num_left) - { - n = get_bits(8); - prec = n >> 4; - n &= 0x0F; - - if (n >= JPGD_MAX_QUANT_TABLES) - stop_decoding(JPGD_BAD_DQT_TABLE); - - if (!m_quant[n]) - m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t)); - - // read quantization entries, in zag order - for (i = 0; i < 64; i++) - { - temp = get_bits(8); - - if (prec) - temp = (temp << 8) + get_bits(8); - - m_quant[n][i] = static_cast(temp); - } - - i = 64 + 1; - - if (prec) - i += 64; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DQT_LENGTH); - - num_left -= i; - } - } - - // Read the start of frame (SOF) marker. - void jpeg_decoder::read_sof_marker() - { - int i; - uint num_left; - - num_left = get_bits(16); - - if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */ - stop_decoding(JPGD_BAD_PRECISION); - - m_image_y_size = get_bits(16); - - if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT)) - stop_decoding(JPGD_BAD_HEIGHT); - - m_image_x_size = get_bits(16); - - if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH)) - stop_decoding(JPGD_BAD_WIDTH); - - m_comps_in_frame = get_bits(8); - - if (m_comps_in_frame > JPGD_MAX_COMPONENTS) - stop_decoding(JPGD_TOO_MANY_COMPONENTS); - - if (num_left != (uint)(m_comps_in_frame * 3 + 8)) - stop_decoding(JPGD_BAD_SOF_LENGTH); - - for (i = 0; i < m_comps_in_frame; i++) - { - m_comp_ident[i] = get_bits(8); - m_comp_h_samp[i] = get_bits(4); - m_comp_v_samp[i] = get_bits(4); - m_comp_quant[i] = get_bits(8); - } - } - - // Used to skip unrecognized markers. - void jpeg_decoder::skip_variable_marker() - { - uint num_left; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_VARIABLE_MARKER); - - num_left -= 2; - - while (num_left) - { - get_bits(8); - num_left--; - } - } - - // Read a define restart interval (DRI) marker. - void jpeg_decoder::read_dri_marker() - { - if (get_bits(16) != 4) - stop_decoding(JPGD_BAD_DRI_LENGTH); - - m_restart_interval = get_bits(16); - } - - // Read a start of scan (SOS) marker. - void jpeg_decoder::read_sos_marker() - { - uint num_left; - int i, ci, n, c, cc; - - num_left = get_bits(16); - - n = get_bits(8); - - m_comps_in_scan = n; - - num_left -= 3; - - if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) ) - stop_decoding(JPGD_BAD_SOS_LENGTH); - - for (i = 0; i < n; i++) - { - cc = get_bits(8); - c = get_bits(8); - num_left -= 2; - - for (ci = 0; ci < m_comps_in_frame; ci++) - if (cc == m_comp_ident[ci]) - break; - - if (ci >= m_comps_in_frame) - stop_decoding(JPGD_BAD_SOS_COMP_ID); - - m_comp_list[i] = ci; - m_comp_dc_tab[ci] = (c >> 4) & 15; - m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1); - } - - m_spectral_start = get_bits(8); - m_spectral_end = get_bits(8); - m_successive_high = get_bits(4); - m_successive_low = get_bits(4); - - if (!m_progressive_flag) - { - m_spectral_start = 0; - m_spectral_end = 63; - } - - num_left -= 3; - - while (num_left) /* read past whatever is num_left */ - { - get_bits(8); - num_left--; - } - } - - // Finds the next marker. - int jpeg_decoder::next_marker() - { - uint c, bytes; - - bytes = 0; - - do - { - do - { - bytes++; - c = get_bits(8); - } while (c != 0xFF); - - do - { - c = get_bits(8); - } while (c == 0xFF); - - } while (c == 0); - - // If bytes > 0 here, there where extra bytes before the marker (not good). - - return c; - } - - // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is - // encountered. - int jpeg_decoder::process_markers() - { - int c; - - for ( ; ; ) - { - c = next_marker(); - - switch (c) - { - case M_SOF0: - case M_SOF1: - case M_SOF2: - case M_SOF3: - case M_SOF5: - case M_SOF6: - case M_SOF7: - // case M_JPG: - case M_SOF9: - case M_SOF10: - case M_SOF11: - case M_SOF13: - case M_SOF14: - case M_SOF15: - case M_SOI: - case M_EOI: - case M_SOS: - { - return c; - } - case M_DHT: - { - read_dht_marker(); - break; - } - // No arithmitic support - dumb patents! - case M_DAC: - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - case M_DQT: - { - read_dqt_marker(); - break; - } - case M_DRI: - { - read_dri_marker(); - break; - } - //case M_APP0: /* no need to read the JFIF marker */ - - case M_JPG: - case M_RST0: /* no parameters */ - case M_RST1: - case M_RST2: - case M_RST3: - case M_RST4: - case M_RST5: - case M_RST6: - case M_RST7: - case M_TEM: - { - stop_decoding(JPGD_UNEXPECTED_MARKER); - break; - } - default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */ - { - skip_variable_marker(); - break; - } - } - } - } - - // Finds the start of image (SOI) marker. - // This code is rather defensive: it only checks the first 512 bytes to avoid - // false positives. - void jpeg_decoder::locate_soi_marker() - { - uint lastchar, thischar; - uint bytesleft; - - lastchar = get_bits(8); - - thischar = get_bits(8); - - /* ok if it's a normal JPEG file without a special header */ - - if ((lastchar == 0xFF) && (thischar == M_SOI)) - return; - - bytesleft = 4096; //512; - - for ( ; ; ) - { - if (--bytesleft == 0) - stop_decoding(JPGD_NOT_JPEG); - - lastchar = thischar; - - thischar = get_bits(8); - - if (lastchar == 0xFF) - { - if (thischar == M_SOI) - break; - else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end - stop_decoding(JPGD_NOT_JPEG); - } - } - - // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad. - thischar = (m_bit_buf >> 24) & 0xFF; - - if (thischar != 0xFF) - stop_decoding(JPGD_NOT_JPEG); - } - - // Find a start of frame (SOF) marker. - void jpeg_decoder::locate_sof_marker() - { - locate_soi_marker(); - - int c = process_markers(); - - switch (c) - { - case M_SOF2: - m_progressive_flag = JPGD_TRUE; - case M_SOF0: /* baseline DCT */ - case M_SOF1: /* extended sequential DCT */ - { - read_sof_marker(); - break; - } - case M_SOF9: /* Arithmitic coding */ - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - default: - { - stop_decoding(JPGD_UNSUPPORTED_MARKER); - break; - } - } - } - - // Find a start of scan (SOS) marker. - int jpeg_decoder::locate_sos_marker() - { - int c; - - c = process_markers(); - - if (c == M_EOI) - return JPGD_FALSE; - else if (c != M_SOS) - stop_decoding(JPGD_UNEXPECTED_MARKER); - - read_sos_marker(); - - return JPGD_TRUE; - } - - // Reset everything to default/uninitialized state. - void jpeg_decoder::init(jpeg_decoder_stream *pStream) - { - m_pMem_blocks = NULL; - m_error_code = JPGD_SUCCESS; - m_ready_flag = false; - m_image_x_size = m_image_y_size = 0; - m_pStream = pStream; - m_progressive_flag = JPGD_FALSE; - - memset(m_huff_ac, 0, sizeof(m_huff_ac)); - memset(m_huff_num, 0, sizeof(m_huff_num)); - memset(m_huff_val, 0, sizeof(m_huff_val)); - memset(m_quant, 0, sizeof(m_quant)); - - m_scan_type = 0; - m_comps_in_frame = 0; - - memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp)); - memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp)); - memset(m_comp_quant, 0, sizeof(m_comp_quant)); - memset(m_comp_ident, 0, sizeof(m_comp_ident)); - memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks)); - memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks)); - - m_comps_in_scan = 0; - memset(m_comp_list, 0, sizeof(m_comp_list)); - memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab)); - memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab)); - - m_spectral_start = 0; - m_spectral_end = 0; - m_successive_low = 0; - m_successive_high = 0; - m_max_mcu_x_size = 0; - m_max_mcu_y_size = 0; - m_blocks_per_mcu = 0; - m_max_blocks_per_row = 0; - m_mcus_per_row = 0; - m_mcus_per_col = 0; - m_expanded_blocks_per_component = 0; - m_expanded_blocks_per_mcu = 0; - m_expanded_blocks_per_row = 0; - m_freq_domain_chroma_upsample = false; - - memset(m_mcu_org, 0, sizeof(m_mcu_org)); - - m_total_lines_left = 0; - m_mcu_lines_left = 0; - m_real_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_pixel = 0; - - memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs)); - - memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs)); - memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs)); - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_eob_run = 0; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_pIn_buf_ofs = m_in_buf; - m_in_buf_left = 0; - m_eof_flag = false; - m_tem_flag = 0; - - memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start)); - memset(m_in_buf, 0, sizeof(m_in_buf)); - memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end)); - - m_restart_interval = 0; - m_restarts_left = 0; - m_next_restart_num = 0; - - m_max_mcus_per_row = 0; - m_max_blocks_per_mcu = 0; - m_max_mcus_per_col = 0; - - memset(m_last_dc_val, 0, sizeof(m_last_dc_val)); - m_pMCU_coefficients = NULL; - m_pSample_buf = NULL; - - m_total_bytes_read = 0; - - m_pScan_line_0 = NULL; - m_pScan_line_1 = NULL; - - // Ready the input buffer. - prep_in_buffer(); - - // Prime the bit buffer. - m_bits_left = 16; - m_bit_buf = 0; - - get_bits(16); - get_bits(16); - - for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++) - m_mcu_block_max_zag[i] = 64; - } - -#define SCALEBITS 16 -#define ONE_HALF ((int) 1 << (SCALEBITS-1)) -#define FIX(x) ((int) ((x) * (1L<> SCALEBITS; - m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS; - m_crg[i] = (-FIX(0.71414f)) * k; - m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF; - } - } - - // This method throws back into the stream any bytes that where read - // into the bit buffer during initial marker scanning. - void jpeg_decoder::fix_in_buffer() - { - // In case any 0xFF's where pulled into the buffer during marker scanning. - JPGD_ASSERT((m_bits_left & 7) == 0); - - if (m_bits_left == 16) - stuff_char( (uint8)(m_bit_buf & 0xFF)); - - if (m_bits_left >= 8) - stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF)); - - stuff_char((uint8)((m_bit_buf >> 16) & 0xFF)); - stuff_char((uint8)((m_bit_buf >> 24) & 0xFF)); - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - void jpeg_decoder::transform_mcu(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64; - - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - } - - static const uint8 s_max_rc[64] = - { - 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86, - 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136 - }; - - void jpeg_decoder::transform_mcu_expand(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64; - - // Y IDCT - int mcu_block; - for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - - // Chroma IDCT, with upsampling - jpgd_block_t temp_block[64]; - - for (int i = 0; i < 2; i++) - { - DCT_Upsample::Matrix44 P, Q, R, S; - - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1); - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64); - - switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1]) - { - case 1*16+1: - DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr); - break; - case 1*16+2: - DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr); - break; - case 2*16+2: - DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+2: - DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+3: - DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr); - break; - case 3*16+4: - DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr); - break; - case 4*16+4: - DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+4: - DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+5: - DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr); - break; - case 5*16+6: - DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr); - break; - case 6*16+6: - DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+6: - DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+7: - DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr); - break; - case 7*16+8: - DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr); - break; - case 8*16+8: - DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr); - break; - default: - JPGD_ASSERT(false); - } - - DCT_Upsample::Matrix44 a(P + Q); P -= Q; - DCT_Upsample::Matrix44& b = P; - DCT_Upsample::Matrix44 c(R + S); R -= S; - DCT_Upsample::Matrix44& d = R; - - DCT_Upsample::Matrix44::add_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::add_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - pSrc_ptr += 64; - } - } - - // Loads and dequantizes the next row of (already decoded) coefficients. - // Progressive images only. - void jpeg_decoder::load_next_row() - { - int i; - jpgd_block_t *p; - jpgd_quant_t *q; - int mcu_row, mcu_block, row_block = 0; - int component_num, component_id; - int block_x_mcu[JPGD_MAX_COMPONENTS]; - - memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - q = m_quant[m_comp_quant[component_id]]; - - p = m_pMCU_coefficients + 64 * mcu_block; - - jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - p[0] = pDC[0]; - memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t)); - - for (i = 63; i > 0; i--) - if (p[g_ZAG[i]]) - break; - - m_mcu_block_max_zag[mcu_block] = i + 1; - - for ( ; i >= 0; i--) - if (p[g_ZAG[i]]) - p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]); - - row_block++; - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - - // Restart interval processing. - void jpeg_decoder::process_restart() - { - int i; - int c = 0; - - // Align to a byte boundry - // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers! - //get_bits_no_markers(m_bits_left & 7); - - // Let's scan a little bit to find the marker, but not _too_ far. - // 1536 is a "fudge factor" that determines how much to scan. - for (i = 1536; i > 0; i--) - if (get_char() == 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - for ( ; i > 0; i--) - if ((c = get_char()) != 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Is it the expected marker? If not, something bad happened. - if (c != (m_next_restart_num + M_RST0)) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Reset each component's DC prediction values. - memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - m_restarts_left = m_restart_interval; - - m_next_restart_num = (m_next_restart_num + 1) & 7; - - // Get the bit buffer going again... - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - static inline int dequantize_ac(int c, int q) { c *= q; return c; } - - // Decodes and dequantizes the next row of coefficients. - void jpeg_decoder::decode_next_row() - { - int row_block = 0; - - for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - jpgd_block_t* p = m_pMCU_coefficients; - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64) - { - int component_id = m_mcu_org[mcu_block]; - jpgd_quant_t* q = m_quant[m_comp_quant[component_id]]; - - int r, s; - s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r); - s = HUFF_EXTEND(r, s); - - m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]); - - p[0] = static_cast(s * q[0]); - - int prev_num_set = m_mcu_block_max_zag[mcu_block]; - - huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]]; - - int k; - for (k = 1; k < 64; k++) - { - int extra_bits; - s = huff_decode(pH, extra_bits); - - r = s >> 4; - s &= 15; - - if (s) - { - if (r) - { - if ((k + r) > 63) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(r, prev_num_set - k); - int kt = k; - while (n--) - p[g_ZAG[kt++]] = 0; - } - - k += r; - } - - s = HUFF_EXTEND(extra_bits, s); - - JPGD_ASSERT(k < 64); - - p[g_ZAG[k]] = static_cast(dequantize_ac(s, q[k])); //s * q[k]; - } - else - { - if (r == 15) - { - if ((k + 16) > 64) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(16, prev_num_set - k); - int kt = k; - while (n--) - { - JPGD_ASSERT(kt <= 63); - p[g_ZAG[kt++]] = 0; - } - } - - k += 16 - 1; // - 1 because the loop counter is k - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0); - // END EPIC MOD - } - else - break; - } - } - - if (k < prev_num_set) - { - int kt = k; - while (kt < prev_num_set) - p[g_ZAG[kt++]] = 0; - } - - m_mcu_block_max_zag[mcu_block] = k; - - row_block++; - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - - m_restarts_left--; - } - } - - // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB - void jpeg_decoder::H1V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int y = s[j]; - int cb = s[64+j]; - int cr = s[128+j]; - - if (jpg_format == ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - d += 4; - } - - s += 64*3; - } - } - - // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H2V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *y = m_pSample_buf + row * 8; - uint8 *c = m_pSample_buf + 2*64 + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 4; j++) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j<<1]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - } - - d0 += 8; - - c++; - } - y += 64; - } - - y += 64*4 - 64*2; - c += 64*4 - 8; - } - } - - // YCbCr H2V1 (1x2:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H1V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*1 + (row & 7) * 8; - - c = m_pSample_buf + 64*2 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int cb = c[0+j]; - int cr = c[64+j]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - } - - d0 += 4; - d1 += 4; - } - - y += 64*4; - c += 64*4; - } - } - - // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB - void jpeg_decoder::H2V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*2 + (row & 7) * 8; - - c = m_pSample_buf + 64*4 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 8; j += 2) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+bc); - d1[5] = clamp(yy+gc); - d1[6] = clamp(yy+rc); - d1[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+rc); - d1[5] = clamp(yy+gc); - d1[6] = clamp(yy+bc); - d1[7] = 255; - } - - d0 += 8; - d1 += 8; - - c++; - } - y += 64; - } - - y += 64*6 - 64*2; - c += 64*6 - 8; - } - } - - // Y (1 block per MCU) to 8-bit grayscale - void jpeg_decoder::gray_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - *(uint *)d = *(uint *)s; - *(uint *)(&d[4]) = *(uint *)(&s[4]); - - s += 64; - d += 8; - } - } - - void jpeg_decoder::expanded_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - - uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8; - - uint8* d = m_pScan_line_0; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int k = 0; k < m_max_mcu_x_size; k += 8) - { - const int Y_ofs = k * 8; - const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component; - const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2; - for (int j = 0; j < 8; j++) - { - int y = Py[Y_ofs + j]; - int cb = Py[Cb_ofs + j]; - int cr = Py[Cr_ofs + j]; - - if (jpg_format == ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - - d += 4; - } - } - - Py += 64 * m_expanded_blocks_per_mcu; - } - } - - // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream. - void jpeg_decoder::find_eoi() - { - if (!m_progressive_flag) - { - // Attempt to read the EOI marker. - //get_bits_no_markers(m_bits_left & 7); - - // Prime the bit buffer - m_bits_left = 16; - get_bits(16); - get_bits(16); - - // The next marker _should_ be EOI - process_markers(); - } - - m_total_bytes_read -= m_in_buf_left; - } - - int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len) - { - if ((m_error_code) || (!m_ready_flag)) - return JPGD_FAILED; - - if (m_total_lines_left == 0) - return JPGD_DONE; - - if (m_mcu_lines_left == 0) - { - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - if (m_progressive_flag) - load_next_row(); - else - decode_next_row(); - - // Find the EOI marker if that was the last row. - if (m_total_lines_left <= m_max_mcu_y_size) - find_eoi(); - - m_mcu_lines_left = m_max_mcu_y_size; - } - - if (m_freq_domain_chroma_upsample) - { - expanded_convert(); - *pScan_line = m_pScan_line_0; - } - else - { - switch (m_scan_type) - { - case JPGD_YH2V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H2V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH2V1: - { - H2V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_YH1V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H1V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH1V1: - { - H1V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_GRAYSCALE: - { - gray_convert(); - *pScan_line = m_pScan_line_0; - - break; - } - } - } - - *pScan_line_len = m_real_dest_bytes_per_scan_line; - - m_mcu_lines_left--; - m_total_lines_left--; - - return JPGD_SUCCESS; - } - - // Creates the tables needed for efficient Huffman decoding. - void jpeg_decoder::make_huff_table(int index, huff_tables *pH) - { - int p, i, l, si; - uint8 huffsize[257]; - uint huffcode[257]; - uint code; - uint subtree; - int code_size; - int lastp; - int nextfreeentry; - int currententry; - - pH->ac_table = m_huff_ac[index] != 0; - - p = 0; - - for (l = 1; l <= 16; l++) - { - for (i = 1; i <= m_huff_num[index][l]; i++) - huffsize[p++] = static_cast(l); - } - - huffsize[p] = 0; - - lastp = p; - - code = 0; - si = huffsize[0]; - p = 0; - - while (huffsize[p]) - { - while (huffsize[p] == si) - { - huffcode[p++] = code; - code++; - } - - code <<= 1; - si++; - } - - memset(pH->look_up, 0, sizeof(pH->look_up)); - memset(pH->look_up2, 0, sizeof(pH->look_up2)); - memset(pH->tree, 0, sizeof(pH->tree)); - memset(pH->code_size, 0, sizeof(pH->code_size)); - - nextfreeentry = -1; - - p = 0; - - while (p < lastp) - { - i = m_huff_val[index][p]; - code = huffcode[p]; - code_size = huffsize[p]; - - pH->code_size[i] = static_cast(code_size); - - if (code_size <= 8) - { - code <<= (8 - code_size); - - for (l = 1 << (8 - code_size); l > 0; l--) - { - JPGD_ASSERT(i < 256); - - pH->look_up[code] = i; - - bool has_extrabits = false; - int extra_bits = 0; - int num_extra_bits = i & 15; - - int bits_to_fetch = code_size; - if (num_extra_bits) - { - int total_codesize = code_size + num_extra_bits; - if (total_codesize <= 8) - { - has_extrabits = true; - extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize)); - JPGD_ASSERT(extra_bits <= 0x7FFF); - bits_to_fetch += num_extra_bits; - } - } - - if (!has_extrabits) - pH->look_up2[code] = i | (bits_to_fetch << 8); - else - pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8); - - code++; - } - } - else - { - subtree = (code >> (code_size - 8)) & 0xFF; - - currententry = pH->look_up[subtree]; - - if (currententry == 0) - { - pH->look_up[subtree] = currententry = nextfreeentry; - pH->look_up2[subtree] = currententry = nextfreeentry; - - nextfreeentry -= 2; - } - - code <<= (16 - (code_size - 8)); - - for (l = code_size; l > 9; l--) - { - if ((code & 0x8000) == 0) - currententry--; - - if (pH->tree[-currententry - 1] == 0) - { - pH->tree[-currententry - 1] = nextfreeentry; - - currententry = nextfreeentry; - - nextfreeentry -= 2; - } - else - currententry = pH->tree[-currententry - 1]; - - code <<= 1; - } - - if ((code & 0x8000) == 0) - currententry--; - - pH->tree[-currententry - 1] = i; - } - - p++; - } - } - - // Verifies the quantization tables needed for this scan are available. - void jpeg_decoder::check_quant_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL) - stop_decoding(JPGD_UNDEFINED_QUANT_TABLE); - } - - // Verifies that all the Huffman tables needed for this scan are available. - void jpeg_decoder::check_huff_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - { - if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - - if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - } - - for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++) - if (m_huff_num[i]) - { - if (!m_pHuff_tabs[i]) - m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables)); - - make_huff_table(i, m_pHuff_tabs[i]); - } - } - - // Determines the component order inside each MCU. - // Also calcs how many MCU's are on each row, etc. - void jpeg_decoder::calc_mcu_block_order() - { - int component_num, component_id; - int max_h_samp = 0, max_v_samp = 0; - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - if (m_comp_h_samp[component_id] > max_h_samp) - max_h_samp = m_comp_h_samp[component_id]; - - if (m_comp_v_samp[component_id] > max_v_samp) - max_v_samp = m_comp_v_samp[component_id]; - } - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8; - m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8; - } - - if (m_comps_in_scan == 1) - { - m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]]; - m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]]; - } - else - { - m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp; - m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp; - } - - if (m_comps_in_scan == 1) - { - m_mcu_org[0] = m_comp_list[0]; - - m_blocks_per_mcu = 1; - } - else - { - m_blocks_per_mcu = 0; - - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - int num_blocks; - - component_id = m_comp_list[component_num]; - - num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id]; - - while (num_blocks--) - m_mcu_org[m_blocks_per_mcu++] = component_id; - } - } - } - - // Starts a new scan. - int jpeg_decoder::init_scan() - { - if (!locate_sos_marker()) - return JPGD_FALSE; - - calc_mcu_block_order(); - - check_huff_tables(); - - check_quant_tables(); - - memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - if (m_restart_interval) - { - m_restarts_left = m_restart_interval; - m_next_restart_num = 0; - } - - fix_in_buffer(); - - return JPGD_TRUE; - } - - // Starts a frame. Determines if the number of components or sampling factors - // are supported. - void jpeg_decoder::init_frame() - { - int i; - - if (m_comps_in_frame == 1) - { - if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1)) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - m_scan_type = JPGD_GRAYSCALE; - m_max_blocks_per_mcu = 1; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if (m_comps_in_frame == 3) - { - if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) || - ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) ) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH1V1; - - m_max_blocks_per_mcu = 3; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH2V1; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH1V2; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 16; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH2V2; - m_max_blocks_per_mcu = 6; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 16; - } - else - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - } - else - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size; - m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size; - - // These values are for the *destination* pixels: after conversion. - if (m_scan_type == JPGD_GRAYSCALE) - m_dest_bytes_per_pixel = 1; - else - m_dest_bytes_per_pixel = 4; - - m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel; - - m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel); - - // Initialize two scan line buffers. - m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2)) - m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - - m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu; - - // Should never happen - if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW) - stop_decoding(JPGD_ASSERTION_ERROR); - - // Allocate the coefficient buffer, enough for one MCU - m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t)); - - for (i = 0; i < m_max_blocks_per_mcu; i++) - m_mcu_block_max_zag[i] = 64; - - m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0]; - m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame; - m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu; - // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor. -// BEGIN EPIC MOD -#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING - m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3); -#else - m_freq_domain_chroma_upsample = 0; -#endif -// END EPIC MOD - - if (m_freq_domain_chroma_upsample) - m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64); - else - m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64); - - m_total_lines_left = m_image_y_size; - - m_mcu_lines_left = 0; - - create_look_ups(); - } - - // The coeff_buf series of methods originally stored the coefficients - // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache - // was used to make this process more efficient. Now, we can store the entire - // thing in RAM. - jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y) - { - coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf)); - - cb->block_num_x = block_num_x; - cb->block_num_y = block_num_y; - cb->block_len_x = block_len_x; - cb->block_len_y = block_len_y; - cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t); - cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true); - return cb; - } - - inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y) - { - JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y)); - return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x)); - } - - // The following methods decode the various types of m_blocks encountered - // in progressively encoded images. - void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, r; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0) - { - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - } - - pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]); - - p[0] = static_cast(s << pD->m_successive_low); - } - - void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - if (pD->get_bits_no_markers(1)) - { - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - p[0] |= (1 << pD->m_successive_low); - } - } - - void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int k, s, r; - - if (pD->m_eob_run) - { - pD->m_eob_run--; - return; - } - - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if ((k += r) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - - p[g_ZAG[k]] = static_cast(s << pD->m_successive_low); - } - else - { - if (r == 15) - { - if ((k += 15) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - } - else - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - pD->m_eob_run--; - - break; - } - } - } - } - - void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, k, r; - int p1 = 1 << pD->m_successive_low; - int m1 = (-1) << pD->m_successive_low; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - k = pD->m_spectral_start; - - if (pD->m_eob_run == 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if (s != 1) - pD->stop_decoding(JPGD_DECODE_ERROR); - - if (pD->get_bits_no_markers(1)) - s = p1; - else - s = m1; - } - else - { - if (r != 15) - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - break; - } - } - - do - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if (*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - else - { - if (--r < 0) - break; - } - - k++; - - } while (k <= pD->m_spectral_end); - - if ((s) && (k < 64)) - { - p[g_ZAG[k]] = static_cast(s); - } - } - } - - if (pD->m_eob_run > 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if (*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - } - - pD->m_eob_run--; - } - } - - // Decode a scan in a progressively encoded image. - void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func) - { - int mcu_row, mcu_col, mcu_block; - int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS]; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++) - { - int component_num, component_id; - - memset(block_x_mcu, 0, sizeof(block_x_mcu)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - - decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - m_restarts_left--; - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - } - - // Decode a progressively encoded image. - void jpeg_decoder::init_progressive() - { - int i; - - if (m_comps_in_frame == 4) - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - // Allocate the coefficient buffers. - for (i = 0; i < m_comps_in_frame; i++) - { - m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1); - m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8); - } - - for ( ; ; ) - { - int dc_only_scan, refinement_scan; - pDecode_block_func decode_block_func; - - if (!init_scan()) - break; - - dc_only_scan = (m_spectral_start == 0); - refinement_scan = (m_successive_high != 0); - - if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63)) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if (dc_only_scan) - { - if (m_spectral_end) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - } - else if (m_comps_in_scan != 1) /* AC scans can only contain one component */ - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if ((refinement_scan) && (m_successive_low != m_successive_high - 1)) - stop_decoding(JPGD_BAD_SOS_SUCCESSIVE); - - if (dc_only_scan) - { - if (refinement_scan) - decode_block_func = decode_block_dc_refine; - else - decode_block_func = decode_block_dc_first; - } - else - { - if (refinement_scan) - decode_block_func = decode_block_ac_refine; - else - decode_block_func = decode_block_ac_first; - } - - decode_scan(decode_block_func); - - m_bits_left = 16; - get_bits(16); - get_bits(16); - } - - m_comps_in_scan = m_comps_in_frame; - - for (i = 0; i < m_comps_in_frame; i++) - m_comp_list[i] = i; - - calc_mcu_block_order(); - } - - void jpeg_decoder::init_sequential() - { - if (!init_scan()) - stop_decoding(JPGD_UNEXPECTED_MARKER); - } - - void jpeg_decoder::decode_start() - { - init_frame(); - - if (m_progressive_flag) - init_progressive(); - else - init_sequential(); - } - - void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream) - { - init(pStream); - locate_sof_marker(); - } - - jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream) - { - if (setjmp(m_jmp_state)) - return; - decode_init(pStream); - } - - int jpeg_decoder::begin_decoding() - { - if (m_ready_flag) - return JPGD_SUCCESS; - - if (m_error_code) - return JPGD_FAILED; - - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - decode_start(); - - m_ready_flag = true; - - return JPGD_SUCCESS; - } - - jpeg_decoder::~jpeg_decoder() - { - free_all_blocks(); - } - - jpeg_decoder_file_stream::jpeg_decoder_file_stream() - { - m_pFile = NULL; - m_eof_flag = false; - m_error_flag = false; - } - - void jpeg_decoder_file_stream::close() - { - if (m_pFile) - { - fclose(m_pFile); - m_pFile = NULL; - } - - m_eof_flag = false; - m_error_flag = false; - } - - jpeg_decoder_file_stream::~jpeg_decoder_file_stream() - { - close(); - } - - bool jpeg_decoder_file_stream::open(const char *Pfilename) - { - close(); - - m_eof_flag = false; - m_error_flag = false; - -#if defined(_MSC_VER) - m_pFile = NULL; - fopen_s(&m_pFile, Pfilename, "rb"); -#else - m_pFile = fopen(Pfilename, "rb"); -#endif - return m_pFile != NULL; - } - - int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - if (!m_pFile) - return -1; - - if (m_eof_flag) - { - *pEOF_flag = true; - return 0; - } - - if (m_error_flag) - return -1; - - int bytes_read = static_cast(fread(pBuf, 1, max_bytes_to_read, m_pFile)); - if (bytes_read < max_bytes_to_read) - { - if (ferror(m_pFile)) - { - m_error_flag = true; - return -1; - } - - m_eof_flag = true; - *pEOF_flag = true; - } - - return bytes_read; - } - - bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size) - { - close(); - m_pSrc_data = pSrc_data; - m_ofs = 0; - m_size = size; - return true; - } - - int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - *pEOF_flag = false; - - if (!m_pSrc_data) - return -1; - - uint bytes_remaining = m_size - m_ofs; - if ((uint)max_bytes_to_read > bytes_remaining) - { - max_bytes_to_read = bytes_remaining; - *pEOF_flag = true; - } - - memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read); - m_ofs += max_bytes_to_read; - - return max_bytes_to_read; - } - - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps) - { - if (!actual_comps) - return NULL; - *actual_comps = 0; - - if ((!pStream) || (!width) || (!height) || (!req_comps)) - return NULL; - - if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4)) - return NULL; - - jpeg_decoder decoder(pStream); - if (decoder.get_error_code() != JPGD_SUCCESS) - return NULL; - - const int image_width = decoder.get_width(), image_height = decoder.get_height(); - *width = image_width; - *height = image_height; - *actual_comps = decoder.get_num_components(); - - if (decoder.begin_decoding() != JPGD_SUCCESS) - return NULL; - - const int dst_bpl = image_width * req_comps; - - uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height); - if (!pImage_data) - return NULL; - - for (int y = 0; y < image_height; y++) - { - const uint8* pScan_line = 0; - uint scan_line_len; - if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS) - { - jpgd_free(pImage_data); - return NULL; - } - - uint8 *pDst = pImage_data + y * dst_bpl; - - if (((req_comps == 4) && (decoder.get_num_components() == 3)) || - ((req_comps == 1) && (decoder.get_num_components() == 1))) - { - memcpy(pDst, pScan_line, dst_bpl); - } - else if (decoder.get_num_components() == 1) - { - if (req_comps == 3) - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst += 3; - } - } - else - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst[3] = 255; - pDst += 4; - } - } - } - else if (decoder.get_num_components() == 3) - { - if (req_comps == 1) - { - const int YR = 19595, YG = 38470, YB = 7471; - for (int x = 0; x < image_width; x++) - { - int r = pScan_line[x*4+0]; - int g = pScan_line[x*4+1]; - int b = pScan_line[x*4+2]; - *pDst++ = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - } - } - else - { - for (int x = 0; x < image_width; x++) - { - pDst[0] = pScan_line[x*4+0]; - pDst[1] = pScan_line[x*4+1]; - pDst[2] = pScan_line[x*4+2]; - pDst += 3; - } - } - } - } - - return pImage_data; - } - -// BEGIN EPIC MOD - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format) - { - jpg_format = (ERGBFormatJPG)format; -// EMD EPIC MOD - jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size); - return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps); - } - - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps) - { - jpgd::jpeg_decoder_file_stream file_stream; - if (!file_stream.open(pSrc_filename)) - return NULL; - return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps); - } - -} // namespace jpgd diff --git a/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/social_impact.md b/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/social_impact.md deleted file mode 100644 index f38d25dc668cc7fc4745627397e4eb0c08e4d892..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/social_impact.md +++ /dev/null @@ -1,5 +0,0 @@ -Our initial plan was to include 4 high-resource and 4 low-resource languages (Marathi, Bengali, Urdu, Telegu) in our training data. However, the existing translations do not perform as well and we would have received poor labels, not to mention, with a longer training time. - -Being able to automatically describe the content of an image using properly formed sentences in any language is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. - -A slightly (not-so) long term use case would definitely be, explaining what happens in a video, frame by frame. One more recent use-case for the same can be generating surgical instructions. Since our model is multi-lingual which means the instructions will not be just limited to regions where English is spoken but those instructions can be perused in regions where Spanish, French and German are spoken as well. Further if we extend this project to low-resource languages then its impact can be manifold. \ No newline at end of file diff --git a/spaces/freshield/ChatGPT-gradio/lib/AESCipher.py b/spaces/freshield/ChatGPT-gradio/lib/AESCipher.py deleted file mode 100644 index 1876531d4f59fbcf04f10d13fd2154726b86832f..0000000000000000000000000000000000000000 --- a/spaces/freshield/ChatGPT-gradio/lib/AESCipher.py +++ /dev/null @@ -1,54 +0,0 @@ -# coding=utf-8 -""" -@Author: Freshield -@Contact: yangyufresh@163.com -@File: AESCipher.py -@Time: 2023-03-05 22:55 -@Last_update: 2023-03-05 22:55 -@Desc: None -@==============================================@ -@ _____ _ _ _ _ @ -@ | __|___ ___ ___| |_|_|___| |_| | @ -@ | __| _| -_|_ -| | | -_| | . | @ -@ |__| |_| |___|___|_|_|_|___|_|___| @ -@ Freshield @ -@==============================================@ -""" -from Crypto.Cipher import AES -import base64 - -# 加密函数 -def aes_encrypt(key, data): - # 将key转换成16、24、32位的字符串,不足的以空格补齐 - key = key.ljust(32, ' ') - # 将data转换成16的倍数,不足的以空格补齐 - data = data.ljust(16 * (len(data) // 16 + 1), ' ') - # 进行加密 - cipher = AES.new(key.encode('utf-8'), AES.MODE_ECB) - encrypted_data = cipher.encrypt(data.encode('utf-8')) - # 将加密后的数据进行base64编码 - encrypted_data = base64.b64encode(encrypted_data).decode('utf-8') - return encrypted_data - -# 解密函数 -def aes_decrypt(key, encrypted_data): - # 将key转换成16、24、32位的字符串,不足的以空格补齐 - key = key.ljust(32, ' ') - # 对加密后的数据进行base64解码 - encrypted_data = base64.b64decode(encrypted_data) - # 进行解密 - cipher = AES.new(key.encode('utf-8'), AES.MODE_ECB) - decrypted_data = cipher.decrypt(encrypted_data).decode('utf-8') - # 去除解密后的数据中的空格 - decrypted_data = decrypted_data.strip() - return decrypted_data - - -# 测试 -if __name__ == '__main__': - key = '1234567890123456345345' - data = 'Hello, world!' - encrypted_data = aes_encrypt(key, data) - print('加密后的数据:', encrypted_data) - decrypted_data = aes_decrypt(key, encrypted_data) - print('解密后的数据:', decrypted_data) diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/commands/__init__.py b/spaces/fuckyoudeki/AutoGPT/autogpt/commands/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/gauss314/vllc/README.md b/spaces/gauss314/vllc/README.md deleted file mode 100644 index 6845e23b1e89c34b14d98e218c84bfc7e805db45..0000000000000000000000000000000000000000 --- a/spaces/gauss314/vllc/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Simulacion diputados ARG 2023 -emoji: ✉️ -colorFrom: blue -colorTo: indigo -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py deleted file mode 100644 index d22ba52640bebd805b3b8d07025e276dfb023759..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/configs/_base_/models/dmnet_r50-d8.py +++ /dev/null @@ -1,44 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DMHead', - in_channels=2048, - in_index=3, - channels=512, - filter_sizes=(1, 3, 5, 7), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=dict(type='SyncBN', requires_grad=True), - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/giswqs/Streamlit/apps/plotly_maps.py b/spaces/giswqs/Streamlit/apps/plotly_maps.py deleted file mode 100644 index dd5031ef58437eb37659174fb419ec150c61e2c3..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/apps/plotly_maps.py +++ /dev/null @@ -1,17 +0,0 @@ -import streamlit as st -import leafmap.plotlymap as leafmap - - -def app(): - - st.title("Plotly Maps") - m = leafmap.Map(basemap="street", height=650) - m.add_mapbox_layer(style="streets") - - basemaps = list(leafmap.basemaps.keys()) - basemap = st.selectbox( - "Select a basemap", basemaps, basemaps.index("Stamen.Terrain") - ) - m.add_basemap(basemap) - - st.plotly_chart(m, use_container_width=True) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Data Cash 230embryology mcq bank pdf free 125 Embryology Review and Self-Assessment.md b/spaces/gotiQspiryo/whisper-ui/examples/Data Cash 230embryology mcq bank pdf free 125 Embryology Review and Self-Assessment.md deleted file mode 100644 index ac0c293ffa0698a4c455977b93c5f5db8afdce87..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Data Cash 230embryology mcq bank pdf free 125 Embryology Review and Self-Assessment.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Data Cash 230embryology mcq bank pdf free 125


      Downloadhttps://urlgoal.com/2uyN1h



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/gotiQspiryo/whisper-ui/examples/I just want to be someone. Well doesnt everyone How to Overcome Superwoman Syndrome and Find Your True Self.md b/spaces/gotiQspiryo/whisper-ui/examples/I just want to be someone. Well doesnt everyone How to Overcome Superwoman Syndrome and Find Your True Self.md deleted file mode 100644 index 62f84ced3eee3c5a4745c014cfe72204c1f3d65d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/I just want to be someone. Well doesnt everyone How to Overcome Superwoman Syndrome and Find Your True Self.md +++ /dev/null @@ -1,7 +0,0 @@ - -

      This is episode 241 And it's another episode in the body image series. I'm taking a deep dive into Superwoman syndrome. If you struggle with trying to do it all and be it all then you don't want to miss this episode, I talk about what Superwoman syndrome is six ways to identify whether you have it, how it impacts your body image and 10 ways to break free, you can find all the links and resources mentioned including a free worksheet called 10 ways to reject Superwoman syndrome at summer innanen.com forward slash Two for one. Let's give a shout out to Cal flan who left this review. Listen to this podcast summer has awesome insight and hosts great guests. This podcast has literally changed the direction of my life for the better. Thank you so much for leaving that review. I would really appreciate it if you took a minute to leave a review as well. Go to iTunes search for eat the rules. Then click ratings and reviews and click to leave a review or give it a rating. Don't forget to grab your free copy of the 10 day body confidence makeover at summer innanen.com forward slash freebies with 10 steps to take right now to feel better in your body. And if you are a professional who works with people who may also have body image struggles get the free body image coaching roadmap at summer innanen.com forward slash roadmap. Don't forget to subscribe to this podcast to takes a second to just click that little follow or subscribe button via whatever platform you're using. All right, let's dive into this week's episode, I want to talk about why I'm doing this episode when I was living with the challenges of adenomyosis. Or when I was going through a lot of pain and just the stress of having to kind of design my life around it. And not getting sleep for a week, every couple of weeks. If you want to hear more about that, you can go back and listen to episode 232, where I talk about that in more detail. But when I was going through that, it made me realize how often we're used to suffering. We're used to just, you know, having pain or having some kind of discomfort or being exhausted, and really just downplaying it and saying, oh, you know, it's not that bad. Or, oh, I'll just wait to see a doctor. Or I'll just, you know, continue to kind of keep pushing through, I just need to work harder, I just need to get off earlier or, you know, I or I am able to just push through and people praise me for it. So that motivates me to keep doing it. It really made me reflect on that both with that. And also, after I had COVID and my heart rate was higher. And yeah, it just made me really think about these things. And I also hear from clients all the time, that really struggle with this pressure to do it all and be at all. And we're conditioned to ignore things like exhaustion, or pain or other discomfort from our body in an effort to just keep all the balls in the air, right? Like we just think like, well, I don't have time for that or it's not that bad. Instead of really like valuing, if there's, you know, valuing ourselves enough and respecting ourselves enough to say, hey, you know what, I'm not really feeling like myself, or outside of just kind of like more medical symptoms, really just, you know, the sheer exhaustion and loud voice of your inner critic that you end up suffering with as a result of trying to do it all and be at all. And I remember seeing this post on Facebook. This was a few years ago, and I'm going to try and keep this as anonymous as possible. Pretty much no one listening to this is going to know who I'm talking about. So that's good. I don't think so at least. Anyways, there was a post that came up on my Facebook feed and it was someone praising their wife for working full time and

      -

      I just want to be someone. Well doesn’t everyone : Surviving Superwoman Syndrome.


      DOWNLOAD ⚹⚹⚹ https://urlgoal.com/2uyLBK



      -

      and going to the gym and taking care of their kids right up until her, I think it was like their third or fourth child came. So she was like, extremely pregnant. And it was this post being like, I'm so proud of my wife, you know, she's still seeing so many clients every day and she's going to the gym and she takes care of the kids. And everyone in the comments was like, You're so amazing, your wife is so incredible. And I was like, why are we praising this? Like, why do we praise this, like the baby's literally about to fall out of her, like, take some damn rest, you know. And it actually frustrated me so much that inspired me to write this post many years ago. So I'm going to read you a Facebook post that I wrote, that was inspired by seeing that I'm hyper aware of how our culture praises the super woman, the woman who can manage being a mom, a career person health enthusiast, while maintaining a Pinterest worthy home. And how this obsession with doing it all starts to impact the pressure we put back on ourselves. I've decided I'm here to praise the ones who rest the ones who set firm boundaries, the ones who say no, the ones who ask for help, the ones who admit they don't have it all together, the ones that choose the messiness of life, instead of trying to keep everything polished and pristine. That's the kind of Superwoman I am here for. And it got a really great response, because I think everyone really needed to hear this. And it's a sigh of relief, to think like, Okay, we don't need to be like no one is perfect, no one is holding it all together, people who kind of give that impression typically either have a ton of help that you don't see, or there's a lot going on in their inner world, that would be we don't know about, like, maybe they don't actually feel that good. And they just give off this perception of it. I think that happens, the majority of the time, to be honest, and our culture really glorifies and sees people as morally superior, who can juggle everything. And what happens is, is that we then internalize that we internalize that we should be able to then do everything effortlessly. And then if we can't, there is something wrong with us. So for example, like kind of like what happened to me when we look at social media, and we see people with their well organized homes and their glossy skin and their homemade meals, and they're raising five kids, and they're working a full time job. And we think, well, I should be able to do that then too. And I see this time and time again, I see clients beating themselves up because they feel like they can't do it all. And they feel like there's something wrong with them as a result of that. And that need to do it all. And B it all has a name and that name is Superwoman syndrome. And that is a very gendered name. That is the kind of official name that people use and psychologists use around it. However, I recognize that that is extreme that's looking at things very binary. So maybe we call it superhero syndrome. Although I would say it does disproportionately affect people who identify as as female. And it really is this idea that we can do it all and be at all, that we can maintain our hosts and focus on our health and have the perfect body and be career oriented, and effortlessly manage, like all that mental load of running household, which is especially dominant if you're a parent, and then just show up in life like Kelly Ripa with like a lot of energy and a smile. You know, that's that's kind of how, when I think about who, who's sort of like, you know, the Superwoman archetype, I really always think of like Kelly Ripa and like Gwyneth Paltrow, those are kind of the two people that come to mind for me, I'm always curious as to like, who might come to mind for you, it's probably different. But that's who I think about. And I think, we think, okay, I should be able to do that, too, I should look like them, I should be able to be like energetic, and perky and all this other stuff. And it tears us apart inside. When we're attempting to do all this, we're doing it at the expense of ourselves, we're putting others needs before our own, we're putting all of our energy into kind of this image that we want other people to see us as. And we put that into how we may appear to others. And we putting that all before our own well being, we're usually putting that before a lot of our own wants and our own needs, and the things that we truly value. And so in other words, what we're often doing is we're chasing this illusion of having it all together and this illusion of perfection over what we actually want to need. We say to ourselves, you know, I'm going to put other people's needs before my own, I'm going to put how I how I think other people should perceive me before my needs. And what I mean by that is that we often get sucked into kind of being a superhero because we don't want to be perceived as less than so there's definitely an external influence here, in terms of how other people perceive us and judge us. And there's a fear behind that there's a fear behind these behaviors. And that fear is often I'm sure there's more than a Milus here, but those fears are often that we're going to be perceived as inadequate, that we're going to be perceived as less than that we're going to be perceived as failures or NP

      -

      reductive or lazy, and that is cultural conditioning, this has been conditioned in us as a way to gain more social currency, if we can just, you know, give the impression that we have it all together, then we'll get more power or will be more open to being desirable and lovable and all of these other things that come along with that. Our culture praises busyness, it praises the hustle, it praises hard work. And that's really, if if you kind of look into the history of that, and the racial origins of that, that's really the culture of, of whiteness. If you Google the culture of whiteness, you'll find a lot of stuff on that. And if you look at the images, you'll find really good illustrations of the breakdown of what that looks like. It's kind of like this idea of quiet quitting, that's come up right now, where if you're just doing your job, you're considered, you know, not as valuable. Because again, this ties back to this culture that we have that really praises kind of overworking and going over and above and competition and all this other stuff and productivity. And I'm not saying that you can't work hard at things and go to the other or go at or that we should all just go to the other extreme, and do nothing. But there's a diminishing point of returns to hard work, there's a diminishing point of returns to these efforts. And if they're not coming from a place of respect for our own needs and wants, then it's absolutely going to be at a detriment to our mental well being. And all of this sources from the archetype of the quote unquote, perfect woman, I kind of mentioned this earlier, when I talked about when I think of when I think of the perfect woman I think of, you know, Gwyneth Paltrow, or I think of Kelly Ripa. And it's really just, you know, that that woman that like, has it all together and is successful and is like, has a smile and all this other crap. But that trying to fit into that archetype of the perfect woman steals our time, it steals our energy, and it steals our resources, much like diet culture was, which is a subset of that. So trying to like hustle to give off this illusion of perfection. It takes a hell of a lot of time, a hell of a lot of energy, and money, and just our resources in general. And I want to note here that this is much more difficult for women of color or other marginalized groups, as they often have to work much harder to prove themselves. And in order to deal with subtle and overt forms of discrimination. I came across this interesting article by Dr. Helen fosu, who is a psychologist and wrote this great article about her own lived experience with Superwoman syndrome. So I'm going to link to that in the show notes. Because I think that offers a different lived experience than the one that I have with it. And there's also this really interesting study that I came across that I'll link to in the show notes, which is called Superwoman schema, African American woman's views on stress, strengthen health, which really breaks down the origins of this even further and how the this really does stem from more racial origins that and that's not easy to find when you kind of just loosely Google Superwoman syndrome online. And what they talk, one of the things that they really talk specifically about in that study is that among black women, Superwoman syndrome is quote, unquote, necessary for survival. And the thing that I really took away from that is that depending on your level of privilege, you likely feel greater pressure to engage in Superwoman syndrome as a means of survival. And I've certainly worked with clients that identify as fat, who have felt increased pressure, they felt increased pressure to work harder to prove themselves because of their body size. And again, this is coming back to just the product of the culture that we live in the dominant white culture, the fat phobic culture, the sexist culture that values overworking and competition and power. And so I think that it's always really good to know the origins of these things, it's really important to look at the social justice aspect of these things. Because it's not an individual defect. It's not just like, oh, you know, you just need to, like stop trying to work so hard. It's literally in our DNA as a way to try to survive for a lot of individuals more so if you experience different levels of oppression. And so it's not something to feel bad about. So don't beat yourself up for having this. It's not something to, you know, again, feel shame about. I think, let's bring awareness to this and see how our culture perpetuates this narrative. Like let's open up our awareness to it so that we can actively reject it as individuals and also look for the systemic ways that we can try to create change or reject it on a

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/gradio/HuBERT/examples/m2m_100/process_data/dedup_data.py b/spaces/gradio/HuBERT/examples/m2m_100/process_data/dedup_data.py deleted file mode 100644 index 58d9ed1cd17b3ba70772a6d9adab709785495fd9..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/m2m_100/process_data/dedup_data.py +++ /dev/null @@ -1,91 +0,0 @@ -import argparse -from collections import namedtuple -import os - -DATADIR = "/path/to/train_data" -DEDUP_FROM_DIR = "/path/to/eval/data" -OUTPUT_DIR = "/path/to/output/data" - - -def main(args): - languages = set() - for language_directory in os.listdir(DATADIR): - if "_" in language_directory: - src, tgt = language_directory.split("_") - languages.add(LanguagePair(src=src, tgt=tgt)) - - data = existing_data() - train_languages = sorted(languages) - for language_pair in train_languages[args.start_index:args.start_index + args.size]: - print(language_pair) - dedup(language_pair, data) - - -LanguagePair = namedtuple("LanguagePair", ["src", "tgt"]) - - -def existing_data(): - data = set() - for file in os.listdir(DEDUP_FROM_DIR): - with open(os.path.join(DEDUP_FROM_DIR, file)) as f: - data |= set(f.readlines()) - return data - -def dedup(language_pair, data, verbose=True, output=True): - train_filenames = LanguagePair( - src=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.src}", - tgt=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.tgt}", - ) - - output_filenames = LanguagePair( - src=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.src}", - tgt=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.tgt}" - ) - - # If output exists, skip this pair. It has already been done. - if (os.path.exists(output_filenames.src) and - os.path.exists(output_filenames.tgt)): - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} already done.") - return - - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} ready, will check dups.") - - # If there is no output, no need to actually do the loop. - if not output: - return - - if os.path.exists(train_filenames.src) and os.path.exists(train_filenames.tgt): - with open(train_filenames.src) as f: - train_source = f.readlines() - - with open(train_filenames.tgt) as f: - train_target = f.readlines() - - # do dedup - new_train_source = [] - new_train_target = [] - for i, train_line in enumerate(train_source): - if train_line not in data and train_target[i] not in data: - new_train_source.append(train_line) - new_train_target.append(train_target[i]) - - assert len(train_source) == len(train_target) - assert len(new_train_source) == len(new_train_target) - assert len(new_train_source) <= len(train_source) - - with open(output_filenames.src, "w") as o: - for line in new_train_source: - o.write(line) - - with open(output_filenames.tgt, "w") as o: - for line in new_train_target: - o.write(line) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("-s", "--start-index", required=True, type=int) - parser.add_argument("-n", "--size", required=True, type=int) - main(parser.parse_args()) diff --git a/spaces/gradio/HuBERT/examples/speech_recognition/models/w2l_conv_glu_enc.py b/spaces/gradio/HuBERT/examples/speech_recognition/models/w2l_conv_glu_enc.py deleted file mode 100644 index 655a9b0d19d11e35511392a016f9d6b7d7aa2925..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/speech_recognition/models/w2l_conv_glu_enc.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules.fairseq_dropout import FairseqDropout - - -default_conv_enc_config = """[ - (400, 13, 170, 0.2), - (440, 14, 0, 0.214), - (484, 15, 0, 0.22898), - (532, 16, 0, 0.2450086), - (584, 17, 0, 0.262159202), - (642, 18, 0, 0.28051034614), - (706, 19, 0, 0.30014607037), - (776, 20, 0, 0.321156295296), - (852, 21, 0, 0.343637235966), - (936, 22, 0, 0.367691842484), - (1028, 23, 0, 0.393430271458), - (1130, 24, 0, 0.42097039046), - (1242, 25, 0, 0.450438317792), - (1366, 26, 0, 0.481969000038), - (1502, 27, 0, 0.51570683004), - (1652, 28, 0, 0.551806308143), - (1816, 29, 0, 0.590432749713), -]""" - - -@register_model("asr_w2l_conv_glu_encoder") -class W2lConvGluEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--conv-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one conv layer - [(out_channels, kernel_size, padding, dropout), ...] - """, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) - encoder = W2lConvGluEncoder( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - in_channels=args.in_channels, - conv_enc_config=eval(conv_enc_config), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = False - return lprobs - - -class W2lConvGluEncoder(FairseqEncoder): - def __init__( - self, vocab_size, input_feat_per_channel, in_channels, conv_enc_config - ): - super().__init__(None) - - self.input_dim = input_feat_per_channel - if in_channels != 1: - raise ValueError("only 1 input channel is currently supported") - - self.conv_layers = nn.ModuleList() - self.linear_layers = nn.ModuleList() - self.dropouts = [] - cur_channels = input_feat_per_channel - - for out_channels, kernel_size, padding, dropout in conv_enc_config: - layer = nn.Conv1d(cur_channels, out_channels, kernel_size, padding=padding) - layer.weight.data.mul_(math.sqrt(3)) # match wav2letter init - self.conv_layers.append(nn.utils.weight_norm(layer)) - self.dropouts.append( - FairseqDropout(dropout, module_name=self.__class__.__name__) - ) - if out_channels % 2 != 0: - raise ValueError("odd # of out_channels is incompatible with GLU") - cur_channels = out_channels // 2 # halved by GLU - - for out_channels in [2 * cur_channels, vocab_size]: - layer = nn.Linear(cur_channels, out_channels) - layer.weight.data.mul_(math.sqrt(3)) - self.linear_layers.append(nn.utils.weight_norm(layer)) - cur_channels = out_channels // 2 - - def forward(self, src_tokens, src_lengths, **kwargs): - - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - B, T, _ = src_tokens.size() - x = src_tokens.transpose(1, 2).contiguous() # (B, feat, T) assuming C == 1 - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - x = F.glu(x, dim=1) - x = self.dropouts[layer_idx](x) - - x = x.transpose(1, 2).contiguous() # (B, T, 908) - x = self.linear_layers[0](x) - x = F.glu(x, dim=2) - x = self.dropouts[-1](x) - x = self.linear_layers[1](x) - - assert x.size(0) == B - assert x.size(1) == T - - encoder_out = x.transpose(0, 1) # (T, B, vocab_size) - - # need to debug this -- find a simpler/elegant way in pytorch APIs - encoder_padding_mask = ( - torch.arange(T).view(1, T).expand(B, -1).to(x.device) - >= src_lengths.view(B, 1).expand(-1, T) - ).t() # (B x T) -> (T x B) - - return { - "encoder_out": encoder_out, # (T, B, vocab_size) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -@register_model_architecture("asr_w2l_conv_glu_encoder", "w2l_conv_glu_enc") -def w2l_conv_glu_enc(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.in_channels = getattr(args, "in_channels", 1) - args.conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) diff --git a/spaces/gradio/audio_debugger/README.md b/spaces/gradio/audio_debugger/README.md deleted file mode 100644 index 6cc9ddb3220c01da0c8bf1ae9b79d84cc61f5b84..0000000000000000000000000000000000000000 --- a/spaces/gradio/audio_debugger/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: audio_debugger -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gradio/neon-tts-plugin-coqui/README.md b/spaces/gradio/neon-tts-plugin-coqui/README.md deleted file mode 100644 index 34674ebb937a81600e29bf01959c6c0157e182fc..0000000000000000000000000000000000000000 --- a/spaces/gradio/neon-tts-plugin-coqui/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: neon-tts-plugin-coqui -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gstaff/MagicGen/colab-data-test/css/mana.css b/spaces/gstaff/MagicGen/colab-data-test/css/mana.css deleted file mode 100644 index fbb5c7dcda9c9a46afcb2cc236e26fb2fd7f2537..0000000000000000000000000000000000000000 --- a/spaces/gstaff/MagicGen/colab-data-test/css/mana.css +++ /dev/null @@ -1,684 +0,0 @@ -/** - * Global */ -@font-face { - font-family: 'Mana'; - src: url('../fonts/mana.eot?v=0.6'); - src: url('../fonts/mana.eot?#iefix&v=0.6') format('embedded-opentype'), url('../fonts/mana.woff?v=0.6') format('woff'), url('../fonts/mana.ttf?v=0.6') format('truetype'), url('../fonts/mana.svg?v=0.6#mana') format('svg'); - font-weight: normal; - font-style: normal; -} -@font-face { - font-family: 'MPlantin'; - src: url('../fonts/mplantin.eot?v=0.6'); - src: url('../fonts/mplantin.eot?#iefix&v=0.6') format('embedded-opentype'), url('../fonts/mplantin.woff?v=0.6') format('woff'), url('../fonts/mplantin.ttf?v=0.6') format('truetype'), url('../fonts/mplantin.svg?v=0.6#mplantin') format('svg'); - font-weight: normal; - font-style: normal; -} -.ms { - display: inline-block; - font: normal normal normal 14px/1 Mana; - font-size: inherit; - line-height: 1em; - text-rendering: auto; - transform: translate(0, 0); - speak: none; - text-transform: none; - vertical-align: middle; - -webkit-font-smoothing: antialiased; - -moz-osx-font-smoothing: grayscale; -} -/** - * Mana cost styles */ -.ms-cost { - background-color: #BEB9B2; - border-radius: 1em; - color: #111; - font-size: 0.95em; - width: 1.3em; - height: 1.3em; - line-height: 1.35em; - text-align: center; -} -.ms-cost.ms-w, -.ms-cost.ms-wp { - background-color: #F0F2C0; -} -.ms-cost.ms-u, -.ms-cost.ms-up { - background-color: #B5CDE3; -} -.ms-cost.ms-b, -.ms-cost.ms-bp { - background-color: #ACA29A; -} -.ms-cost.ms-r, -.ms-cost.ms-rp { - background-color: #DB8664; -} -.ms-cost.ms-g, -.ms-cost.ms-gp { - background-color: #93B483; -} -.ms-cost.ms-wu { - background: #edf2b0; - background: -moz-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #a6c1dd 50%, #a6c1dd 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #edf2b0), color-stop(50%, #edf2b0), color-stop(50%, #a6c1dd), color-stop(100%, #a6c1dd)); - background: -webkit-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #a6c1dd 50%, #a6c1dd 100%); - background: -o-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #a6c1dd 50%, #a6c1dd 100%); - background: -ms-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #a6c1dd 50%, #a6c1dd 100%); - background: linear-gradient(135deg, #edf2b0 0%, #edf2b0 50%, #a6c1dd 50%, #a6c1dd 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#edf2b0', endColorstr='#a6c1dd', GradientType=1); -} -.ms-cost.ms-wb { - background: #edf2b0; - background: -moz-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #9c9188 50%, #9c9188 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #edf2b0), color-stop(50%, #edf2b0), color-stop(50%, #9c9188), color-stop(100%, #9c9188)); - background: -webkit-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #9c9188 50%, #9c9188 100%); - background: -o-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #9c9188 50%, #9c9188 100%); - background: -ms-linear-gradient(-45deg, #edf2b0 0%, #edf2b0 50%, #9c9188 50%, #9c9188 100%); - background: linear-gradient(135deg, #edf2b0 0%, #edf2b0 50%, #9c9188 50%, #9c9188 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#edf2b0', endColorstr='#9c9188', GradientType=1); -} -.ms-cost.ms-ub { - background: #a6c1dd; - background: -moz-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #9c9188 50%, #9c9188 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #a6c1dd), color-stop(50%, #a6c1dd), color-stop(50%, #9c9188), color-stop(100%, #9c9188)); - background: -webkit-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #9c9188 50%, #9c9188 100%); - background: -o-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #9c9188 50%, #9c9188 100%); - background: -ms-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #9c9188 50%, #9c9188 100%); - background: linear-gradient(135deg, #a6c1dd 0%, #a6c1dd 50%, #9c9188 50%, #9c9188 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#a6c1dd', endColorstr='#9c9188', GradientType=1); -} -.ms-cost.ms-ur { - background: #a6c1dd; - background: -moz-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #db8664 50%, #db8664 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #a6c1dd), color-stop(50%, #a6c1dd), color-stop(50%, #db8664), color-stop(100%, #db8664)); - background: -webkit-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #db8664 50%, #db8664 100%); - background: -o-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #db8664 50%, #db8664 100%); - background: -ms-linear-gradient(-45deg, #a6c1dd 0%, #a6c1dd 50%, #db8664 50%, #db8664 100%); - background: linear-gradient(135deg, #a6c1dd 0%, #a6c1dd 50%, #db8664 50%, #db8664 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#a6c1dd', endColorstr='#db8664', GradientType=1); -} -.ms-cost.ms-br { - background: #aca29a; - background: -moz-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #db8664 50%, #db8664 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #aca29a), color-stop(50%, #aca29a), color-stop(50%, #db8664), color-stop(100%, #db8664)); - background: -webkit-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #db8664 50%, #db8664 100%); - background: -o-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #db8664 50%, #db8664 100%); - background: -ms-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #db8664 50%, #db8664 100%); - background: linear-gradient(135deg, #aca29a 0%, #aca29a 50%, #db8664 50%, #db8664 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#aca29a', endColorstr='#db8664', GradientType=1); -} -.ms-cost.ms-bg { - background: #aca29a; - background: -moz-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #93b483 50%, #93b483 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #aca29a), color-stop(50%, #aca29a), color-stop(50%, #93b483), color-stop(100%, #93b483)); - background: -webkit-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #93b483 50%, #93b483 100%); - background: -o-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #93b483 50%, #93b483 100%); - background: -ms-linear-gradient(-45deg, #aca29a 0%, #aca29a 50%, #93b483 50%, #93b483 100%); - background: linear-gradient(135deg, #aca29a 0%, #aca29a 50%, #93b483 50%, #93b483 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#aca29a', endColorstr='#93b483', GradientType=1); -} -.ms-cost.ms-rw { - background: #db8664; - background: -moz-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #edf2b0 50%, #edf2b0 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #db8664), color-stop(50%, #db8664), color-stop(50%, #edf2b0), color-stop(100%, #edf2b0)); - background: -webkit-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #edf2b0 50%, #edf2b0 100%); - background: -o-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #edf2b0 50%, #edf2b0 100%); - background: -ms-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #edf2b0 50%, #edf2b0 100%); - background: linear-gradient(135deg, #db8664 0%, #db8664 50%, #edf2b0 50%, #edf2b0 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#db8664', endColorstr='#edf2b0', GradientType=1); -} -.ms-cost.ms-rg { - background: #db8664; - background: -moz-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #93b483 50%, #93b483 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #db8664), color-stop(50%, #db8664), color-stop(50%, #93b483), color-stop(100%, #93b483)); - background: -webkit-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #93b483 50%, #93b483 100%); - background: -o-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #93b483 50%, #93b483 100%); - background: -ms-linear-gradient(-45deg, #db8664 0%, #db8664 50%, #93b483 50%, #93b483 100%); - background: linear-gradient(135deg, #db8664 0%, #db8664 50%, #93b483 50%, #93b483 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#db8664', endColorstr='#93b483', GradientType=1); -} -.ms-cost.ms-gw { - background: #93b483; - background: -moz-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #edf2b0 50%, #edf2b0 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #93b483), color-stop(50%, #93b483), color-stop(50%, #edf2b0), color-stop(100%, #edf2b0)); - background: -webkit-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #edf2b0 50%, #edf2b0 100%); - background: -o-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #edf2b0 50%, #edf2b0 100%); - background: -ms-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #edf2b0 50%, #edf2b0 100%); - background: linear-gradient(135deg, #93b483 0%, #93b483 50%, #edf2b0 50%, #edf2b0 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#93b483', endColorstr='#edf2b0', GradientType=1); -} -.ms-cost.ms-gu { - background: #93b483; - background: -moz-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #b5cde3 50%, #b5cde3 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #93b483), color-stop(50%, #93b483), color-stop(50%, #b5cde3), color-stop(100%, #b5cde3)); - background: -webkit-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #b5cde3 50%, #b5cde3 100%); - background: -o-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #b5cde3 50%, #b5cde3 100%); - background: -ms-linear-gradient(-45deg, #93b483 0%, #93b483 50%, #b5cde3 50%, #b5cde3 100%); - background: linear-gradient(135deg, #93b483 0%, #93b483 50%, #b5cde3 50%, #b5cde3 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#93b483', endColorstr='#b5cde3', GradientType=1); -} -.ms-cost.ms-2w { - background: #beb9b2; - background: -moz-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #edf2b0 50%, #edf2b0 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #beb9b2), color-stop(50%, #beb9b2), color-stop(50%, #edf2b0), color-stop(100%, #edf2b0)); - background: -webkit-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #edf2b0 50%, #edf2b0 100%); - background: -o-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #edf2b0 50%, #edf2b0 100%); - background: -ms-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #edf2b0 50%, #edf2b0 100%); - background: linear-gradient(135deg, #beb9b2 0%, #beb9b2 50%, #edf2b0 50%, #edf2b0 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#beb9b2', endColorstr='#edf2b0', GradientType=1); -} -.ms-cost.ms-2u { - background: #beb9b2; - background: -moz-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #b5cde3 50%, #b5cde3 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #beb9b2), color-stop(50%, #beb9b2), color-stop(50%, #b5cde3), color-stop(100%, #b5cde3)); - background: -webkit-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #b5cde3 50%, #b5cde3 100%); - background: -o-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #b5cde3 50%, #b5cde3 100%); - background: -ms-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #b5cde3 50%, #b5cde3 100%); - background: linear-gradient(135deg, #beb9b2 0%, #beb9b2 50%, #b5cde3 50%, #b5cde3 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#beb9b2', endColorstr='#b5cde3', GradientType=1); -} -.ms-cost.ms-2b { - background: #beb9b2; - background: -moz-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #9c9188 50%, #9c9188 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #beb9b2), color-stop(50%, #beb9b2), color-stop(50%, #9c9188), color-stop(100%, #9c9188)); - background: -webkit-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #9c9188 50%, #9c9188 100%); - background: -o-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #9c9188 50%, #9c9188 100%); - background: -ms-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #9c9188 50%, #9c9188 100%); - background: linear-gradient(135deg, #beb9b2 0%, #beb9b2 50%, #9c9188 50%, #9c9188 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#beb9b2', endColorstr='#9c9188', GradientType=1); -} -.ms-cost.ms-2r { - background: #beb9b2; - background: -moz-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #db8664 50%, #db8664 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #beb9b2), color-stop(50%, #beb9b2), color-stop(50%, #db8664), color-stop(100%, #db8664)); - background: -webkit-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #db8664 50%, #db8664 100%); - background: -o-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #db8664 50%, #db8664 100%); - background: -ms-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #db8664 50%, #db8664 100%); - background: linear-gradient(135deg, #beb9b2 0%, #beb9b2 50%, #db8664 50%, #db8664 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#beb9b2', endColorstr='#db8664', GradientType=1); -} -.ms-cost.ms-2g { - background: #beb9b2; - background: -moz-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #93b483 50%, #93b483 100%); - background: -webkit-gradient(linear, left top, right bottom, color-stop(0%, #beb9b2), color-stop(50%, #beb9b2), color-stop(50%, #93b483), color-stop(100%, #93b483)); - background: -webkit-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #93b483 50%, #93b483 100%); - background: -o-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #93b483 50%, #93b483 100%); - background: -ms-linear-gradient(-45deg, #beb9b2 0%, #beb9b2 50%, #93b483 50%, #93b483 100%); - background: linear-gradient(135deg, #beb9b2 0%, #beb9b2 50%, #93b483 50%, #93b483 100%); - filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#beb9b2', endColorstr='#93b483', GradientType=1); -} -.ms-cost.ms-p:before, -.ms-cost.ms-wp:before, -.ms-cost.ms-up:before, -.ms-cost.ms-bp:before, -.ms-cost.ms-rp:before, -.ms-cost.ms-gp:before { - display: inline-block; - -moz-transform: scale(1.2, 1.2); - -webkit-transform: scale(1.2, 1.2); - transform: scale(1.2, 1.2); -} -.ms-cost.ms-tap-alt:before { - display: inline-block; - -moz-transform: scale(1.2, 1.2); - -webkit-transform: scale(1.2, 1.2); - transform: scale(1.2, 1.2); - padding-left: .06em; - padding-bottom: 0.10em; -} -.ms-cost.ms-s:before { - color: #fff; - -webkit-text-stroke: 2px #fff; - font-size: 0.85em; - top: -0.05em; - position: relative; - display: inline-block; -} -.ms-cost.ms-s:after { - content: "\e619"; - position: absolute; - color: #333; - margin-left: -0.9em; - font-size: 1.1em; -} -.ms-cost.ms-untap { - background-color: #111; - color: #fff; -} -.ms-cost.ms-shadow { - box-shadow: -0.06em 0.07em 0 #111, 0 0.06em 0 #111; -} -.ms-cost.ms-shadow.ms-untap { - box-shadow: -0.06em 0.07em 0 #fff, 0 0.06em 0 #fff; -} -/** - * Split costs */ -.ms-split { - position: relative; - width: 1.3em; - height: 1.3em; -} -.ms-split:before, -.ms-split:after { - font-size: 0.55em !important; - position: absolute; -} -.ms-split:before { - top: -0.38em; - left: 0.28em; -} -.ms-split:after { - top: 0.5em; - left: 1.0em; -} -/** - * Half costs */ -.ms-half { - width: .675em; - overflow: hidden; - display: inline-block; - margin-left: .675em; -} -.ms-half > .ms-cost { - margin-left: -0.675em; -} -/** - * Un-set costs */ -.ms-100 { - width: 2.4em; -} -.ms-100000 { - width: 5.4em; -} -/** - * Planeswalker symbols */ -.ms-loyalty-up, -.ms-loyalty-down, -.ms-loyalty-zero, -.ms-loyalty-start { - color: #111; - font-size: 1.5em; - position: relative; - text-align: center; -} -.ms-loyalty-start { - font-size: 2.0em; -} -.ms-loyalty-0:after, -.ms-loyalty-1:after, -.ms-loyalty-2:after, -.ms-loyalty-3:after, -.ms-loyalty-4:after, -.ms-loyalty-5:after, -.ms-loyalty-6:after, -.ms-loyalty-7:after, -.ms-loyalty-8:after, -.ms-loyalty-9:after, -.ms-loyalty-10:after, -.ms-loyalty-x:after { - color: #fff; - display: inline-block; - font-size: 0.5em; - font-family: 'MPlantin, Garamond, Palatino, ' Times New Roman ', Times, serif'; - position: absolute; - left: 0; - line-height: 1.75em; - width: 100%; - text-align: center; - -webkit-padding-before: 0.15em; -} -.ms-loyalty-0:after { - content: "0"; -} -.ms-loyalty-up.ms-loyalty-1:after { - content: "+1"; -} -.ms-loyalty-up.ms-loyalty-2:after { - content: "+2"; -} -.ms-loyalty-up.ms-loyalty-3:after { - content: "+3"; -} -.ms-loyalty-up.ms-loyalty-4:after { - content: "+4"; -} -.ms-loyalty-up.ms-loyalty-5:after { - content: "+5"; -} -.ms-loyalty-up.ms-loyalty-6:after { - content: "+6"; -} -.ms-loyalty-up.ms-loyalty-7:after { - content: "+7"; -} -.ms-loyalty-up.ms-loyalty-8:after { - content: "+8"; -} -.ms-loyalty-up.ms-loyalty-9:after { - content: "+9"; -} -.ms-loyalty-up.ms-loyalty-10:after { - content: "+10"; -} -.ms-loyalty-up.ms-loyalty-x:after { - content: "+X"; -} -.ms-loyalty-start.ms-loyalty-1:after { - content: "1"; -} -.ms-loyalty-start.ms-loyalty-2:after { - content: "2"; -} -.ms-loyalty-start.ms-loyalty-3:after { - content: "3"; -} -.ms-loyalty-start.ms-loyalty-4:after { - content: "4"; -} -.ms-loyalty-start.ms-loyalty-5:after { - content: "5"; -} -.ms-loyalty-start.ms-loyalty-6:after { - content: "6"; -} -.ms-loyalty-start.ms-loyalty-7:after { - content: "7"; -} -.ms-loyalty-start.ms-loyalty-8:after { - content: "8"; -} -.ms-loyalty-start.ms-loyalty-9:after { - content: "9"; -} -.ms-loyalty-start.ms-loyalty-10:after { - content: "10"; -} -.ms-loyalty-start.ms-loyalty-x:after { - content: "X"; -} -.ms-loyalty-down:after { - line-height: 1.6em; -} -.ms-loyalty-down.ms-loyalty-1:after { - content: "-1"; -} -.ms-loyalty-down.ms-loyalty-2:after { - content: "-2"; -} -.ms-loyalty-down.ms-loyalty-3:after { - content: "-3"; -} -.ms-loyalty-down.ms-loyalty-4:after { - content: "-4"; -} -.ms-loyalty-down.ms-loyalty-5:after { - content: "-5"; -} -.ms-loyalty-down.ms-loyalty-6:after { - content: "-6"; -} -.ms-loyalty-down.ms-loyalty-7:after { - content: "-7"; -} -.ms-loyalty-down.ms-loyalty-8:after { - content: "-8"; -} -.ms-loyalty-down.ms-loyalty-9:after { - content: "-9"; -} -.ms-loyalty-down.ms-loyalty-10:after { - content: "-10"; -} -.ms-loyalty-down.ms-loyalty-x:after { - content: "-X"; -} -/** - * Double faced cards */ -.ms-dfc { - color: #111; - border: .05em solid #111; - border-radius: 2em; - padding: 1px; -} -/* - * Larger sizes */ -.ms-2x { - font-size: 1.75em; -} -.ms-3x { - font-size: 2.25em; -} -.ms-4x { - font-size: 3.0em; -} -.ms-5x { - font-size: 3.75em; -} -.ms-6x { - font-size: 4.5em; -} -/** - * Mana */ -.ms-w:before { - content: "\e600"; -} -.ms-u:before { - content: "\e601"; -} -.ms-b:before { - content: "\e602"; -} -.ms-r:before { - content: "\e603"; -} -.ms-g:before { - content: "\e604"; -} -.ms-0:before { - content: "\e605"; -} -.ms-1:before { - content: "\e606"; -} -.ms-2:before { - content: "\e607"; -} -.ms-3:before { - content: "\e608"; -} -.ms-4:before { - content: "\e609"; -} -.ms-5:before { - content: "\e60a"; -} -.ms-6:before { - content: "\e60b"; -} -.ms-7:before { - content: "\e60c"; -} -.ms-8:before { - content: "\e60d"; -} -.ms-9:before { - content: "\e60e"; -} -.ms-10:before { - content: "\e60f"; -} -.ms-11:before { - content: "\e610"; -} -.ms-12:before { - content: "\e611"; -} -.ms-13:before { - content: "\e612"; -} -.ms-14:before { - content: "\e613"; -} -.ms-15:before { - content: "\e614"; -} -.ms-16:before { - content: "\e62a"; -} -.ms-17:before { - content: "\e62b"; -} -.ms-18:before { - content: "\e62c"; -} -.ms-19:before { - content: "\e62d"; -} -.ms-20:before { - content: "\e62e"; -} -.ms-x:before { - content: "\e615"; -} -.ms-y:before { - content: "\e616"; -} -.ms-z:before { - content: "\e617"; -} -.ms-p:before, -.ms-wp:before, -.ms-up:before, -.ms-bp:before, -.ms-rp:before, -.ms-gp:before { - content: "\e618"; -} -.ms-s:before { - content: "\e619"; -} -.ms-c:before { - content: "\e904"; -} -/** - * Tap/roll symbols */ -.ms-tap:before { - content: "\e61a"; -} -.ms-untap:before { - content: "\e61b"; -} -.ms-tap-alt:before { - content: "\e61c"; -} -.ms-chaos:before { - content: "\e61d"; -} -.ms-1-2:before { - content: "\e902"; -} -.ms-infinity:before { - content: "\e903"; -} -/** - * Card types */ -.ms-artifact:before { - content: "\e61e"; -} -.ms-creature:before { - content: "\e61f"; -} -.ms-enchantment:before { - content: "\e620"; -} -.ms-instant:before { - content: "\e621"; -} -.ms-land:before { - content: "\e622"; -} -.ms-planeswalker:before { - content: "\e623"; -} -.ms-sorcery:before { - content: "\e624"; -} -/** - * Split symbols */ -.ms-wu:before, -.ms-wb:before, -.ms-rw:after, -.ms-gw:after, -.ms-2w:after { - content: "\e600"; -} -.ms-ub:before, -.ms-ur:before, -.ms-wu:after, -.ms-gu:after, -.ms-2u:after { - content: "\e601"; -} -.ms-br:before, -.ms-bg:before, -.ms-wb:after, -.ms-ub:after, -.ms-2b:after { - content: "\e602"; -} -.ms-rw:before, -.ms-rg:before, -.ms-ur:after, -.ms-br:after, -.ms-2r:after { - content: "\e603"; -} -.ms-gw:before, -.ms-gu:before, -.ms-bg:after, -.ms-rg:after, -.ms-2g:after { - content: "\e604"; -} -.ms-2w:before, -.ms-2u:before, -.ms-2b:before, -.ms-2r:before, -.ms-2g:before { - content: "\e607"; -} -/** - * Un-set symbols */ -.ms-100:before { - content: "\e900"; -} -.ms-100000:before { - content: "\e901"; -} -/** - * Planeswalker symbols */ -.ms-loyalty-up:before { - content: "\e627"; -} -.ms-loyalty-down:before { - content: "\e625"; -} -.ms-loyalty-zero:before { - content: "\e626"; -} -.ms-loyalty-start:before { - content: "\e628"; -} -/** - * Other */ -.ms-flashback:before { - content: "\e629"; -} -.ms-dfc-night:before { - content: "\e905"; -} -.ms-dfc-day:before { - content: "\e906"; -} diff --git a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/__init__.py b/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/__init__.py deleted file mode 100644 index dfebd04f47e6f6b1b44984c14c23b57d56f72240..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -# empty diff --git a/spaces/h2oai/wave-tour/examples/graphics_primitives.py b/spaces/h2oai/wave-tour/examples/graphics_primitives.py deleted file mode 100644 index 6a86166666207b7263f9e0a5338111ed891cb546..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/graphics_primitives.py +++ /dev/null @@ -1,58 +0,0 @@ -# Graphics / Primitives -# Use the #graphics module to render and update shapes. -# --- - -from h2o_wave import site, ui, graphics as g - -# Create some shapes -arc = g.arc(r1=25, r2=50, a1=90, a2=180) -circle = g.circle(cx=25, cy=25, r=25) -ellipse = g.ellipse(cx=25, cy=25, rx=25, ry=20) -image = g.image(width=50, height=50, href='https://www.python.org/static/community_logos/python-powered-h-140x182.png') -line = g.line(x1=0, y1=0, x2=50, y2=50) -path = g.path(d='M 0,0 L 50,50 L 50,0 L 0,50 z') -path2 = g.path(d=g.p().M(0, 0).L(50, 50).L(50, 0).L(0, 50).z().d()) # same effect as above, but programmable. -path3 = g.p().M(0, 0).L(50, 50).L(50, 0).L(0, 50).z().path() # same effect as above, but a tad more concise. -polygon = g.polygon(points='0,0 50,50 50,0 0,50') -polyline = g.polyline(points='0,0 50,50 50,0 0,50') -rect = g.rect(x=0, y=0, width=50, height=50) -rounded_rect = g.rect(x=0, y=0, width=50, height=50, rx=10) -text = g.text(x=0, y=48, text='Z', font_size='4em') - -# Collect 'em all -shapes = [arc, circle, ellipse, image, line, path, path2, path3, polygon, polyline, rect, rounded_rect, text] - -# Apply fill/stroke for each shape -for shape in shapes: - shape.fill = 'none' if g.type_of(shape) == 'polyline' else 'crimson' - shape.stroke = 'darkred' - shape.stroke_width = 5 - -# Lay out shapes vertically -y = 10 -for shape in shapes: - shape.transform = f'translate(10,{y})' - y += 60 - -# Add shapes to the graphics card -page = site['/demo'] -page['example'] = ui.graphics_card( - box='1 1 1 10', view_box='0 0 70 800', width='100%', height='100%', - stage=g.stage( - arc=arc, - circle=circle, - ellipse=ellipse, - image=image, - line=line, - path=path, - path2=path2, - path3=path3, - polygon=polygon, - polyline=polyline, - rect=rect, - rounded_rect=rounded_rect, - text=text, - ), -) - -page.save() diff --git a/spaces/hackathon-pln-es/Sexismdetection/README.md b/spaces/hackathon-pln-es/Sexismdetection/README.md deleted file mode 100644 index 8916e16754b68b92ed8318eb97d05ede49f781c1..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/Sexismdetection/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sexismdetector -emoji: 🙅‍♀️ -colorFrom: purple -colorTo: pink -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hackathon-pln-es/Spanish-Medical-NER/README.md b/spaces/hackathon-pln-es/Spanish-Medical-NER/README.md deleted file mode 100644 index da0381d740acba4e3e623925d8359f75e4638205..0000000000000000000000000000000000000000 --- a/spaces/hackathon-pln-es/Spanish-Medical-NER/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Medical NER (Named Entity Recognition) -emoji: 👩‍⚕ -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 2.9.0 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hamzapehlivan/StyleRes/README.md b/spaces/hamzapehlivan/StyleRes/README.md deleted file mode 100644 index bd8f2985f6ed798896938145d288d9273fd8560d..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StyleRes -emoji: 🚀 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -python_version: 3.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/structures/image_list.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/structures/image_list.py deleted file mode 100644 index 9a70418fb0ef322103f3cf91eb8746d0de04b39a..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/structures/image_list.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from __future__ import division - -import torch - - -class ImageList(object): - """ - Structure that holds a list of images (of possibly - varying sizes) as a single tensor. - This works by padding the images to the same size, - and storing in a field the original sizes of each image - """ - - def __init__(self, tensors, image_sizes): - """ - Arguments: - tensors (tensor) - image_sizes (list[tuple[int, int]]) - """ - self.tensors = tensors - self.image_sizes = image_sizes - - def to(self, *args, **kwargs): - cast_tensor = self.tensors.to(*args, **kwargs) - return ImageList(cast_tensor, self.image_sizes) - - -def to_image_list(tensors, size_divisible=0): - """ - tensors can be an ImageList, a torch.Tensor or - an iterable of Tensors. It can't be a numpy array. - When tensors is an iterable of Tensors, it pads - the Tensors with zeros so that they have the same - shape - """ - if isinstance(tensors, torch.Tensor) and size_divisible > 0: - tensors = [tensors] - - if isinstance(tensors, ImageList): - return tensors - elif isinstance(tensors, torch.Tensor): - # single tensor shape can be inferred - assert tensors.dim() == 4 - image_sizes = [tensor.shape[-2:] for tensor in tensors] - return ImageList(tensors, image_sizes) - elif isinstance(tensors, (tuple, list)): - max_size = tuple(max(s) for s in zip(*[img.shape for img in tensors])) - - # TODO Ideally, just remove this and let me model handle arbitrary - # input sizs - if size_divisible > 0: - import math - - stride = size_divisible - max_size = list(max_size) - max_size[1] = int(math.ceil(max_size[1] / stride) * stride) - max_size[2] = int(math.ceil(max_size[2] / stride) * stride) - max_size = tuple(max_size) - - batch_shape = (len(tensors),) + max_size - batched_imgs = tensors[0].new(*batch_shape).zero_() - for img, pad_img in zip(tensors, batched_imgs): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - - image_sizes = [im.shape[-2:] for im in tensors] - - return ImageList(batched_imgs, image_sizes) - else: - raise TypeError("Unsupported type for to_image_list: {}".format(type(tensors))) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/config.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/config.py deleted file mode 100644 index f33f473cb32633d9ba6582f0406ffe0a929d23c6..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TridentNet/tridentnet/config.py +++ /dev/null @@ -1,26 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from detectron2.config import CfgNode as CN - - -def add_tridentnet_config(cfg): - """ - Add config for tridentnet. - """ - _C = cfg - - _C.MODEL.TRIDENT = CN() - - # Number of branches for TridentNet. - _C.MODEL.TRIDENT.NUM_BRANCH = 3 - # Specify the dilations for each branch. - _C.MODEL.TRIDENT.BRANCH_DILATIONS = [1, 2, 3] - # Specify the stage for applying trident blocks. Default stage is Res4 according to the - # TridentNet paper. - _C.MODEL.TRIDENT.TRIDENT_STAGE = "res4" - # Specify the test branch index TridentNet Fast inference: - # - use -1 to aggregate results of all branches during inference. - # - otherwise, only using specified branch for fast inference. Recommended setting is - # to use the middle branch. - _C.MODEL.TRIDENT.TEST_BRANCH_IDX = 1 diff --git a/spaces/huaiji3y/bingo-Public/src/components/header.tsx b/spaces/huaiji3y/bingo-Public/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
      -
      - -
      -
      - ) -} diff --git a/spaces/hugging-fellows/img-to-music/README.md b/spaces/hugging-fellows/img-to-music/README.md deleted file mode 100644 index ff1948d1b95ee1f8d7a3396aefb285c729d18687..0000000000000000000000000000000000000000 --- a/spaces/hugging-fellows/img-to-music/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Img To Music -emoji: 🌅🎶 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.16.0 -app_file: app.py -pinned: true -duplicated_from: fffiloni/img-to-music ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/index-86f4d6c3.js b/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/index-86f4d6c3.js deleted file mode 100644 index a92134b10c8d70d3789c73329b866e0a48bf8e34..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/index-86f4d6c3.js +++ /dev/null @@ -1,4 +0,0 @@ -function k(){}const ct=t=>t;function ot(t,e){for(const n in e)t[n]=e[n];return t}function J(t){return t()}function I(){return Object.create(null)}function x(t){t.forEach(J)}function K(t){return typeof t=="function"}function qt(t,e){return t!=t?e==e:t!==e||t&&typeof t=="object"||typeof t=="function"}let S;function Tt(t,e){return S||(S=document.createElement("a")),S.href=e,t===S.href}function lt(t){return Object.keys(t).length===0}function ut(t,...e){if(t==null)return k;const n=t.subscribe(...e);return n.unsubscribe?()=>n.unsubscribe():n}function zt(t,e,n){t.$$.on_destroy.push(ut(e,n))}function Bt(t,e,n,r){if(t){const s=Q(t,e,n,r);return t[0](s)}}function Q(t,e,n,r){return t[1]&&r?ot(n.ctx.slice(),t[1](r(e))):n.ctx}function Lt(t,e,n,r){if(t[2]&&r){const s=t[2](r(n));if(e.dirty===void 0)return s;if(typeof s=="object"){const l=[],i=Math.max(e.dirty.length,s.length);for(let o=0;o32){const e=[],n=t.ctx.length/32;for(let r=0;rwindow.performance.now():()=>Date.now(),F=U?t=>requestAnimationFrame(t):k;const b=new Set;function V(t){b.forEach(e=>{e.c(t)||(b.delete(e),e.f())}),b.size!==0&&F(V)}function ft(t){let e;return b.size===0&&F(V),{promise:new Promise(n=>{b.add(e={c:t,f:n})}),abort(){b.delete(e)}}}let O=!1;function _t(){O=!0}function dt(){O=!1}function ht(t,e,n,r){for(;t>1);n(s)<=r?t=s+1:e=s}return t}function mt(t){if(t.hydrate_init)return;t.hydrate_init=!0;let e=t.childNodes;if(t.nodeName==="HEAD"){const c=[];for(let u=0;u0&&e[n[s]].claim_order<=u?s+1:ht(1,s,a=>e[n[a]].claim_order,u))-1;r[c]=n[_]+1;const f=_+1;n[f]=c,s=Math.max(f,s)}const l=[],i=[];let o=e.length-1;for(let c=n[s]+1;c!=0;c=r[c-1]){for(l.push(e[c-1]);o>=c;o--)i.push(e[o]);o--}for(;o>=0;o--)i.push(e[o]);l.reverse(),i.sort((c,u)=>c.claim_order-u.claim_order);for(let c=0,u=0;c=l[u].claim_order;)u++;const _=ut.removeEventListener(e,n,r)}function Ut(t,e,n){n==null?t.removeAttribute(e):t.getAttribute(e)!==n&&t.setAttribute(e,n)}function Vt(t,e,n){t.setAttributeNS("http://www.w3.org/1999/xlink",e,n)}function wt(t){return Array.from(t.childNodes)}function vt(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function Z(t,e,n,r,s=!1){vt(t);const l=(()=>{for(let i=t.claim_info.last_index;i=0;i--){const o=t[i];if(e(o)){const c=n(o);return c===void 0?t.splice(i,1):t[i]=c,s?c===void 0&&t.claim_info.last_index--:t.claim_info.last_index=i,o}}return r()})();return l.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,l}function tt(t,e,n,r){return Z(t,s=>s.nodeName===e,s=>{const l=[];for(let i=0;is.removeAttribute(i))},()=>r(e))}function Xt(t,e,n){return tt(t,e,n,Y)}function Yt(t,e,n){return tt(t,e,n,$t)}function Et(t,e){return Z(t,n=>n.nodeType===3,n=>{const r=""+e;if(n.data.startsWith(r)){if(n.data.length!==r.length)return n.splitText(r.length)}else n.data=r},()=>H(e),!0)}function Zt(t){return Et(t," ")}function te(t,e){e=""+e,t.wholeText!==e&&(t.data=e)}function ee(t,e,n,r){n===null?t.style.removeProperty(e):t.style.setProperty(e,n,r?"important":"")}function et(t,e,{bubbles:n=!1,cancelable:r=!1}={}){const s=document.createEvent("CustomEvent");return s.initCustomEvent(t,n,r,e),s}const D=new Map;let P=0;function kt(t){let e=5381,n=t.length;for(;n--;)e=(e<<5)-e^t.charCodeAt(n);return e>>>0}function Nt(t,e){const n={stylesheet:yt(e),rules:{}};return D.set(t,n),n}function W(t,e,n,r,s,l,i,o=0){const c=16.666/r;let u=`{ -`;for(let p=0;p<=1;p+=c){const g=e+(n-e)*l(p);u+=p*100+`%{${i(g,1-g)}} -`}const _=u+`100% {${i(n,1-n)}} -}`,f=`__svelte_${kt(_)}_${o}`,a=X(t),{stylesheet:d,rules:h}=D.get(a)||Nt(a,t);h[f]||(h[f]=!0,d.insertRule(`@keyframes ${f} ${_}`,d.cssRules.length));const y=t.style.animation||"";return t.style.animation=`${y?`${y}, `:""}${f} ${r}ms linear ${s}ms 1 both`,P+=1,f}function At(t,e){const n=(t.style.animation||"").split(", "),r=n.filter(e?l=>l.indexOf(e)<0:l=>l.indexOf("__svelte")===-1),s=n.length-r.length;s&&(t.style.animation=r.join(", "),P-=s,P||St())}function St(){F(()=>{P||(D.forEach(t=>{const{stylesheet:e}=t;let n=e.cssRules.length;for(;n--;)e.deleteRule(n);t.rules={}}),D.clear())})}let E;function v(t){E=t}function N(){if(!E)throw new Error("Function called outside component initialization");return E}function ne(t){N().$$.on_mount.push(t)}function ie(t){N().$$.after_update.push(t)}function re(t){N().$$.on_destroy.push(t)}function se(){const t=N();return(e,n,{cancelable:r=!1}={})=>{const s=t.$$.callbacks[e];if(s){const l=et(e,n,{cancelable:r});return s.slice().forEach(i=>{i.call(t,l)}),!l.defaultPrevented}return!0}}function ce(t,e){return N().$$.context.set(t,e),e}const w=[],G=[],C=[],B=[],nt=Promise.resolve();let L=!1;function it(){L||(L=!0,nt.then(rt))}function oe(){return it(),nt}function R(t){C.push(t)}function le(t){B.push(t)}const T=new Set;let j=0;function rt(){const t=E;do{for(;j{$=null})),$}function z(t,e,n){t.dispatchEvent(et(`${e?"intro":"outro"}${n}`))}const M=new Set;let m;function ue(){m={r:0,c:[],p:m}}function ae(){m.r||x(m.c),m=m.p}function Mt(t,e){t&&t.i&&(M.delete(t),t.i(e))}function fe(t,e,n,r){if(t&&t.o){if(M.has(t))return;M.add(t),m.c.push(()=>{M.delete(t),r&&(n&&t.d(1),r())}),t.o(e)}}const Dt={duration:0};function _e(t,e,n,r){let s=e(t,n),l=r?0:1,i=null,o=null,c=null;function u(){c&&At(t,c)}function _(a,d){const h=a.b-l;return d*=Math.abs(h),{a:l,b:a.b,d:h,duration:d,start:a.start,end:a.start+d,group:a.group}}function f(a){const{delay:d=0,duration:h=300,easing:y=ct,tick:p=k,css:g}=s||Dt,q={start:at()+d,b:a};a||(q.group=m,m.r+=1),i||o?o=q:(g&&(u(),c=W(t,l,a,h,d,y,g)),a&&p(0,1),i=_(q,h),R(()=>z(t,a,"start")),ft(A=>{if(o&&A>o.start&&(i=_(o,h),o=null,z(t,i.b,"start"),g&&(u(),c=W(t,l,i.b,i.duration,0,y,s.css))),i){if(A>=i.end)p(l=i.b,1-l),z(t,i.b,"end"),o||(i.b?u():--i.group.r||x(i.group.c)),i=null;else if(A>=i.start){const st=A-i.start;l=i.a+i.d*y(st/i.duration),p(l,1-l)}}return!!(i||o)}))}return{run(a){K(s)?Ct().then(()=>{s=s(),f(a)}):f(a)},end(){u(),i=o=null}}}function de(t,e){const n={},r={},s={$$scope:1};let l=t.length;for(;l--;){const i=t[l],o=e[l];if(o){for(const c in i)c in o||(r[c]=1);for(const c in o)s[c]||(n[c]=o[c],s[c]=1);t[l]=o}else for(const c in i)s[c]=1}for(const i in r)i in n||(n[i]=void 0);return n}function he(t){return typeof t=="object"&&t!==null?t:{}}function me(t,e,n){const r=t.$$.props[e];r!==void 0&&(t.$$.bound[r]=n,n(t.$$.ctx[r]))}function pe(t){t&&t.c()}function ye(t,e){t&&t.l(e)}function Pt(t,e,n,r){const{fragment:s,on_mount:l,on_destroy:i,after_update:o}=t.$$;s&&s.m(e,n),r||R(()=>{const c=l.map(J).filter(K);i?i.push(...c):x(c),t.$$.on_mount=[]}),o.forEach(R)}function Rt(t,e){const n=t.$$;n.fragment!==null&&(x(n.on_destroy),n.fragment&&n.fragment.d(e),n.on_destroy=n.fragment=null,n.ctx=[])}function Ot(t,e){t.$$.dirty[0]===-1&&(w.push(t),it(),t.$$.dirty.fill(0)),t.$$.dirty[e/31|0]|=1<{const h=d.length?d[0]:a;return u.ctx&&s(u.ctx[f],u.ctx[f]=h)&&(!u.skip_bound&&u.bound[f]&&u.bound[f](h),_&&Ot(t,f)),a}):[],u.update(),_=!0,x(u.before_update),u.fragment=r?r(u.ctx):!1,e.target){if(e.hydrate){_t();const f=wt(e.target);u.fragment&&u.fragment.l(f),f.forEach(xt)}else u.fragment&&u.fragment.c();e.intro&&Mt(t.$$.fragment),Pt(t,e.target,e.anchor,e.customElement),dt(),rt()}v(c)}class be{$destroy(){Rt(this,1),this.$destroy=k}$on(e,n){const r=this.$$.callbacks[e]||(this.$$.callbacks[e]=[]);return r.push(n),()=>{const s=r.indexOf(n);s!==-1&&r.splice(s,1)}}$set(e){this.$$set&&!lt(e)&&(this.$$.skip_bound=!0,this.$$set(e),this.$$.skip_bound=!1)}}export{Vt as $,he as A,Rt as B,ot as C,oe as D,k as E,Bt as F,Ft as G,Ht as H,Lt as I,bt as J,Qt as K,Gt as L,se as M,$t as N,Yt as O,ct as P,R as Q,_e as R,be as S,Tt as T,x as U,re as V,G as W,le as X,zt as Y,It as Z,me as _,wt as a,Ut as b,Xt as c,xt as d,Y as e,ee as f,Wt as g,Et as h,ge as i,te as j,Jt as k,Kt as l,Zt as m,ue as n,fe as o,ae as p,Mt as q,ce as r,qt as s,H as t,ie as u,ne as v,pe as w,ye as x,Pt as y,de as z}; diff --git a/spaces/hunkim/kakaogpt/app.py b/spaces/hunkim/kakaogpt/app.py deleted file mode 100644 index de73f68a1ecc181146fd485a3b1605c38e68e136..0000000000000000000000000000000000000000 --- a/spaces/hunkim/kakaogpt/app.py +++ /dev/null @@ -1,59 +0,0 @@ -# -*-coding:utf-8-*- -import streamlit as st -# code from https://huggingface.co/kakaobrain/kogpt -import torch -from transformers import AutoTokenizer, AutoModelForCausalLM - - -tokenizer = AutoTokenizer.from_pretrained( - 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b', cache_dir='./model_dir/', - bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]' -) - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -model = AutoModelForCausalLM.from_pretrained( - 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b',cache_dir='./model_dir/', - pad_token_id=tokenizer.eos_token_id, - torch_dtype=torch.float16, low_cpu_mem_usage=True -).to(device=device, non_blocking=True) -_ = model.eval() - -print("Model loading done!") - -def gpt(prompt): - with torch.no_grad(): - tokens = tokenizer.encode(prompt, return_tensors='pt').to(device=device, non_blocking=True) - gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=256) - generated = tokenizer.batch_decode(gen_tokens)[0] - - return generated - - -#prompts -st.title("여러분들의 문장을 완성해줍니다. 🤖") -st.markdown("카카오 gpt 사용합니다.") -st.subheader("몇가지 예제: ") -example_1_str = "오늘의 날씨는 너무 눈부시다. 내일은 " -example_2_str = "우리는 행복을 언제나 갈망하지만 항상 " -example_1 = st.button(example_1_str) -example_2 = st.button(example_2_str) -textbox = st.text_area('오늘은 아름다움을 향해 달리고 ', '',height=100, max_chars=500 ) -button = st.button('생성:') -# output -st.subheader("결과값: ") -if example_1: - with st.spinner('In progress.......'): - output_text = gpt(example_1_str) - st.markdown("\n"+output_text) -if example_2: - with st.spinner('In progress.......'): - output_text = gpt(example_2_str) - st.markdown("\n"+output_text) -if button: - with st.spinner('In progress.......'): - if textbox: - output_text = gpt(textbox) - else: - output_text = " " - st.markdown("\n" + output_text) \ No newline at end of file diff --git a/spaces/hysts/ControlNet-v1-1/app_depth.py b/spaces/hysts/ControlNet-v1-1/app_depth.py deleted file mode 100644 index fdce423a5609e2e9cfe55502d758f38b4367df17..0000000000000000000000000000000000000000 --- a/spaces/hysts/ControlNet-v1-1/app_depth.py +++ /dev/null @@ -1,95 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from settings import ( - DEFAULT_IMAGE_RESOLUTION, - DEFAULT_NUM_IMAGES, - MAX_IMAGE_RESOLUTION, - MAX_NUM_IMAGES, - MAX_SEED, -) -from utils import randomize_seed_fn - - -def create_demo(process): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label="Prompt") - run_button = gr.Button("Run") - with gr.Accordion("Advanced options", open=False): - preprocessor_name = gr.Radio( - label="Preprocessor", choices=["Midas", "DPT", "None"], type="value", value="DPT" - ) - num_samples = gr.Slider( - label="Number of images", minimum=1, maximum=MAX_NUM_IMAGES, value=DEFAULT_NUM_IMAGES, step=1 - ) - image_resolution = gr.Slider( - label="Image resolution", - minimum=256, - maximum=MAX_IMAGE_RESOLUTION, - value=DEFAULT_IMAGE_RESOLUTION, - step=256, - ) - preprocess_resolution = gr.Slider( - label="Preprocess resolution", minimum=128, maximum=512, value=384, step=1 - ) - num_steps = gr.Slider(label="Number of steps", minimum=1, maximum=100, value=20, step=1) - guidance_scale = gr.Slider(label="Guidance scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1) - seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - a_prompt = gr.Textbox(label="Additional prompt", value="best quality, extremely detailed") - n_prompt = gr.Textbox( - label="Negative prompt", - value="longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality", - ) - with gr.Column(): - result = gr.Gallery(label="Output", show_label=False, columns=2, object_fit="scale-down") - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name="depth", - ) - return demo - - -if __name__ == "__main__": - from model import Model - - model = Model(task_name="depth") - demo = create_demo(model.process_depth) - demo.queue().launch() diff --git a/spaces/iccv23-diffusers-demo/Shap-E/utils.py b/spaces/iccv23-diffusers-demo/Shap-E/utils.py deleted file mode 100644 index 36e072134588bf5252bf0f018aa7912d9c45567c..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/Shap-E/utils.py +++ /dev/null @@ -1,9 +0,0 @@ -import random - -from settings import MAX_SEED - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bandicam 4.0.1.1339 Pre-Cracked For Windows - [CrackzSoft] Serial Key Keygen.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bandicam 4.0.1.1339 Pre-Cracked For Windows - [CrackzSoft] Serial Key Keygen.md deleted file mode 100644 index 97a9ffe443ee54912690cd27fdd1a0a357a739a0..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bandicam 4.0.1.1339 Pre-Cracked For Windows - [CrackzSoft] Serial Key Keygen.md +++ /dev/null @@ -1,46 +0,0 @@ -

      Bandicam 4.0.1.1339 Pre-Cracked For Windows - [CrackzSoft] Serial Key keygen


      Download File - https://urlin.us/2uEwoX



      -
      -スポンサードリンク - -ハートちゃんのお母さん シリコーズにもスタンプがあります - -ハートちゃんのお母さんのお母さんがカラフルなスタンプを作ってくれたんです! - -ちゃんとハートちゃんを住むことができるらしく、最近はお母さんのお母さんも本当にお母さんでいて気分が良いですね。 - -子どもらしく可愛らしい - -子どもらしさでいいじゃないですか。 - -はやぶさん、梨沢いとうなんや。 - -八千代、草木七郎 - -八千代は七郎ちゃん。 - -ボンボンです。 - -美人極み。 - -ツールカーを通る途中、 - -深夜の思い出を作ってみよう - -感想はもう書けません - -大人になって辞めていて - -前に言ったけど、それ以上に慣れていない - -僕らのそれには乗ったことがない - -一緒に遊ぶ以外はない - -それだけでもそれだけでやさしい - -曲を楽しんだり、映画を観たりするのがお母さんのお母さんだったらどうしよう - -と 4fefd39f24
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL CyberLink PowerDirector Ultimate 19 0 2819 0 Crack [EXCLUSIVE].md b/spaces/inplisQlawa/anything-midjourney-v4-1/FULL CyberLink PowerDirector Ultimate 19 0 2819 0 Crack [EXCLUSIVE].md deleted file mode 100644 index 3c61d686a1a8daa831e989532567af8ce5193246..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/FULL CyberLink PowerDirector Ultimate 19 0 2819 0 Crack [EXCLUSIVE].md +++ /dev/null @@ -1,6 +0,0 @@ -

      FULL CyberLink PowerDirector Ultimate 19 0 2819 0 Crack


      Download File --->>> https://urlin.us/2uEvM8



      -
      -CyberLink PowerDirector Crack Full Version Torrent With Keygen 2020.. CyberLink PowerDirector Ultimate 19 0 2819 0 + Crack using magnet ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Sachin - A Billion Dreams Hindi Movi).md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Sachin - A Billion Dreams Hindi Movi).md deleted file mode 100644 index ccc7f15b8e69006189bd37b0bada1340760f8e10..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Sachin - A Billion Dreams Hindi Movi).md +++ /dev/null @@ -1,8 +0,0 @@ -
      -

      Rajkumar Hirani, though. The director shared that the film has broken all piracy records, with 25 million people watching the film online. Yet if you're a cinephile with an interest in bringing the best Bollywood movies online, you might be keen to share your views by attending our film festivals in India, UK or the USA to meet the stars in person.

      -

      The story revolves around a lady's journey of how she will be able to achieve her dreams. Prominent actors like Anil Kapoor, Sushant Singh Rajput, and others were part of the ensemble cast of the movie, which deals with Hindu-Muslim relations in India and Pakistan.

      -

      HD Online Player (Sachin - A Billion Dreams Hindi Movi)


      Download Zip ••• https://urlin.us/2uEwme



      -

      The conversation that Wasim Akram had with Sachin Tendulkar was the stuff of legend. Remember when Sachin was the only guy to have won the World Cup and Test Championship? Akram recounts his memories and gives his analysis on Tendulkars career. He also lays some points about his future and the team. Sachin: A Billion Dreams is the definitive doco on the greatest of all time. The whole thing is worth watching because it s not just about Sachin. It also delves into what makes India and Indian cricket so special and so endearing.

      -

      It took the not-so-small size of Sachin to be bigger than himself. Indian cricket has always been synonymous with the Little Master. And Sachin himself has always been a legend. You get to know about the ups and the downs of his life. This is your chance to see the first day of the last day of the first test match. Its the lowest point for the Indian cricket team and their fans. How he deals with that and even the last minute that they played itself out is heartwarming. Because in the end of the day, Sachin won. Sachin: A Billion Dreams is such a heartwarming documentary. It tells the story of one of the greatest in Indian cricket history. Its about his lows and his highs as a cricket player and as a human being.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/La Guerre De Lart Steven Pressfield Pdf Free WORK.md b/spaces/inplisQlawa/anything-midjourney-v4-1/La Guerre De Lart Steven Pressfield Pdf Free WORK.md deleted file mode 100644 index b512555d3e3963aaab6a9684b5581d0c60af6b2e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/La Guerre De Lart Steven Pressfield Pdf Free WORK.md +++ /dev/null @@ -1,68 +0,0 @@ -## La Guerre De Lart Steven Pressfield Pdf Free - - - - - - - - - -**Download File ===> [https://urlcod.com/2txvL3](https://urlcod.com/2txvL3)** - - - - - - - - - - - - - -# La Guerre De Lart: How to Overcome Resistance and Unleash Your Creativity - - - -Do you struggle with procrastination, fear, self-doubt, or perfectionism? Do you have a dream project that you want to pursue, but you don't know how to start or finish it? If so, you are not alone. Many creative people face these challenges every day. - - - -Fortunately, there is a way to overcome them. In his bestselling book *La Guerre De Lart* (The War of Art), Steven Pressfield reveals the secret enemy that prevents us from achieving our full potential: Resistance. Resistance is the negative force that opposes any act of creation, whether it is writing a novel, painting a masterpiece, starting a business, or launching a campaign. Resistance is what makes us feel lazy, insecure, bored, or distracted. Resistance is what tells us that we are not good enough, that we are not ready, that we should wait for inspiration or approval. - - - -Pressfield argues that Resistance is not something that we can avoid or ignore. It is something that we have to face and fight every day. He offers practical advice on how to identify Resistance, how to overcome it, and how to turn it into a source of motivation and inspiration. He also shares inspiring stories of famous artists, writers, entrepreneurs, and leaders who have overcome Resistance and achieved their goals. - - - -If you want to learn how to unleash your creativity and express your true voice, you need to read *La Guerre De Lart*. It is a powerful and inspiring book that will change the way you think about your work and your life. - - - -You can download *La Guerre De Lart* by Steven Pressfield in PDF format for free from various online sources[^1^] [^2^] [^3^]. However, if you enjoy the book and find it helpful, we recommend that you support the author by purchasing a copy from his official website or from your favorite bookstore. - - - -In *La Guerre De Lart*, Pressfield introduces the concept of the Professional and the Amateur. The Professional is someone who treats his or her work as a calling, not as a hobby. The Professional shows up every day, no matter what. The Professional is committed, disciplined, and focused. The Professional does not let Resistance stop him or her from doing the work. - - - -The Amateur, on the other hand, is someone who plays at his or her work, not seriously. The Amateur is easily distracted, discouraged, and defeated by Resistance. The Amateur waits for inspiration, approval, or perfect conditions to do the work. The Amateur does not finish what he or she starts. - - - -Pressfield urges us to become Professionals, not Amateurs. He says that becoming a Professional is not about talent, education, or luck. It is about attitude, mindset, and behavior. It is about making a decision to do the work that matters to us, regardless of the obstacles and challenges that we face. - - - -He also explains that becoming a Professional is not only beneficial for our work, but also for our soul. He says that doing our work is how we connect with our true self, our higher power, and our purpose in life. He says that doing our work is how we express our love for ourselves and for others. He says that doing our work is how we honor the gift that we have been given. - - dfd1c89656 - - - - - diff --git a/spaces/inreVtussa/clothingai/Examples/CValley FILTERiT 5.0.4 And Xtream Path 2.0.4 Win Mac.md b/spaces/inreVtussa/clothingai/Examples/CValley FILTERiT 5.0.4 And Xtream Path 2.0.4 Win Mac.md deleted file mode 100644 index d0a7fa7ec1b5b5fd0b1beed72eaff43f5ebdc31c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/CValley FILTERiT 5.0.4 And Xtream Path 2.0.4 Win Mac.md +++ /dev/null @@ -1,209 +0,0 @@ -
      -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac: The Best Plugins for Adobe Illustrator

      - -

      If you are a graphic designer, illustrator, or web developer who uses Adobe Illustrator, you might be interested in two amazing plugins that can enhance your creativity and productivity: CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac.

      - -

      These plugins are developed by CValley, Inc., a company that specializes in creating software tools for graphic design and web development. CValley has been providing high-quality plugins for Adobe Illustrator since 1997, and has won many awards and recognition from users and experts.

      -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac


      Download Zip ===== https://tiurll.com/2uCjGo



      - -

      In this article, we will introduce you to CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, and show you how they can transform your design workflow with Adobe Illustrator.

      - -

      What is CValley FILTERiT 5.0.4?

      - -

      CValley FILTERiT 5.0.4 is a plugin that adds more than 80 filters and effects to Adobe Illustrator, allowing you to create stunning graphics with ease.

      - -

      With CValley FILTERiT 5.0.4, you can apply various effects such as 3D transformations, warping, fractals, patterns, textures, distortions, gradients, and more to your vector objects.

      - -

      You can also use CValley FILTERiT 5.0.4 to create dynamic animations with Adobe Flash or After Effects, by exporting your vector objects as SWF files.

      - -

      CValley FILTERiT 5.0.4 is compatible with Adobe Illustrator CS6/CC – 2023, and works on both Windows and Mac platforms.

      - -

      What is Xtream Path 2.0.4?

      - -

      Xtream Path 2.0.4 is a plugin that adds more than 40 tools and commands to Adobe Illustrator, allowing you to edit paths and shapes with ease.

      -

      - -

      With Xtream Path 2.0.4, you can manipulate paths and shapes in various ways, such as dragging anchor points, bending segments, scaling objects, rotating handles, aligning points, smoothing curves, and more.

      - -

      You can also use Xtream Path 2.0.4 to create complex shapes with simple operations, such as cutting paths, joining segments, dividing objects, duplicating shapes, creating outlines, and more.

      - -

      Xtream Path 2.0.4 is compatible with Adobe Illustrator CS6/CC – 2023, and works on both Windows and Mac platforms.

      - -

      Why You Need CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac

      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are plugins that can help you create amazing graphics with Adobe Illustrator in less time and with less effort.

      - -

      By using these plugins, you can:

      - -
        -
      • Expand your creative possibilities with more than 120 filters and tools
      • -
      • Save your time and energy by simplifying complex tasks
      • -
      • Improve your quality and accuracy by fine-tuning your results
      • -
      • Enhance your compatibility and flexibility by working with different versions and platforms
      • -
      • Increase your value and reputation by delivering professional outputs
      • -
      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are plugins that can make your design workflow with Adobe Illustrator more fun and rewarding.

      - -

      How to Get CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac

      - -

      If you want to get CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, you can visit the official website of CValley at https://www.cvalley.com/.

      - -

      There you can find more information about these plugins, such as features, screenshots, tutorials, reviews, testimonials, FAQs, support, updates, etc.

      - -

      You can also download the demo versions of these plugins for free from https://www.cvalley.com/downloads-2/. The demo versions work the same as the full versions except for some restrictions such as expiration date and number of restarts.

      - -

      If you want to buy the full versions of these plugins, you can do so from https://www.cvalley.com/store/. The prices are $129 for CValley FILTERiT 5.0.4 and $139 for Xtream Path 2.0.4 Win Mac.

      - -

      You can also get a bundle of both plugins for $199 from https://www.cvalley.com/store/bundle/. This is a great deal that saves you $69 compared to buying them separately.

      - -

      Conclusion

      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are two amazing plugins that can enhance your creativity and productivity with Adobe Illustrator.

      - -

      They offer more than 120 filters and tools that can help you create stunning graphics with ease.

      - -

      They are compatible with Adobe Illustrator CS6/CC – 2023, and work on both Windows and Mac platforms.

      - -

      They are available for purchase from the official website of CValley at https://www.cvalley.com/, or you can get a bundle of both plugins for $199 from https://www.cvalley.com/store/bundle/.

      - -

      If you are a graphic designer, illustrator, or web developer who uses Adobe Illustrator regularly or occasionally,

      -

      you might want to try CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac.

      - -

      These plugins are not only powerful and versatile, but also easy to use and affordable.

      - -

      They can help you create graphics that stand out from the crowd and impress your clients and audience.

      - -

      How to Use CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac

      - -

      To use CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, you need to install them on your computer and activate them with your license key.

      - -

      Once you have done that, you can access them from the Adobe Illustrator menu bar or toolbar.

      - -

      CValley FILTERiT 5.0.4 adds a new menu item called "FILTERiT" under the "Effect" menu, where you can find all the filters and effects available.

      - -

      Xtream Path 2.0.4 adds a new toolbar called "Xtream Path", where you can find all the tools and commands available.

      - -

      You can also customize the settings and preferences of these plugins from the "CValley" menu under the "Window" menu.

      - -

      To apply a filter or effect from CValley FILTERiT 5.0.4, you need to select a vector object or a group of objects, and then choose a filter or effect from the "FILTERiT" menu.

      - -

      You can adjust the parameters of the filter or effect in a dialog box that appears, and preview the result before applying it.

      - -

      To edit a path or shape with Xtream Path 2.0.4, you need to select a path or a group of paths, and then choose a tool or command from the "Xtream Path" toolbar.

      - -

      You can manipulate the path or shape by dragging anchor points, handles, segments, or objects with your mouse, or by entering values in a dialog box that appears.

      - -

      Some Examples of CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac

      - -

      To give you some idea of what you can do with CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, here are some examples of graphics created with these plugins.

      - -

      CValley FILTERiT 5.0.4 Examples

      - -
        -
      • 3D Transform: This filter allows you to create realistic 3D objects from 2D shapes, by applying perspective, lighting, shading, and texture effects.
      • -
      • Live Symbol: This filter allows you to create dynamic patterns from symbols, by applying various transformations such as rotation, scaling, skewing, flipping, etc.
      • -
      • Live Trail: This filter allows you to create dynamic trails from paths, by applying various effects such as fading, blurring, twisting, tapering, etc.
      • -
      • Fractalize: This filter allows you to create organic shapes from paths, by applying fractal algorithms such as Mandelbrot set, Julia set, Koch curve, etc.
      • -
      • MetaBrush: This filter allows you to create artistic strokes from paths, by applying various brushes such as charcoal, watercolor, oil paint, etc.
      • -
      - -

      Xtream Path 2.0.4 Examples

      - -
        -
      • Drag & Drop: This tool allows you to edit paths by dragging anchor points or handles with your mouse, without switching tools or modes.
      • -
      • Bend: This tool allows you to edit paths by bending segments with your mouse, without breaking or distorting them.
      • -
      • Scale & Rotate: This tool allows you to edit paths by scaling or rotating objects with your mouse, without changing their shapes or proportions.
      • -
      • Smart Rounding: This tool allows you to edit paths by rounding corners or curves with your mouse, without losing details or smoothness.
      • -
      • Cut & Connect: This tool allows you to edit paths by cutting or joining segments with your mouse, without losing continuity or direction.
      • -
      - -

      Conclusion

      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are two amazing plugins that can enhance your creativity and productivity with Adobe Illustrator.

      - -

      They offer more than 120 filters and tools that can help you create stunning graphics with ease.

      - -

      They are compatible with Adobe Illustrator CS6/CC – 2023, and work on both Windows and Mac platforms.

      - -

      They are available for purchase from the official website of CValley at https://www.cvalley.com/, or you can get a bundle of both plugins for $199 from https://www.cvalley.com/store/bundle/.

      - -

      If you are a graphic designer, illustrator, or web developer who uses Adobe Illustrator regularly or occasionally,

      -

      you might want to try CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac.

      - -

      These plugins are not only powerful and versatile, but also easy to use and affordable.

      - -

      They can help you create graphics that stand out from the crowd and impress your clients and audience.

      - -

      In this article, we have introduced you to CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, and showed you how they can transform your design workflow with Adobe Illustrator.

      - -

      We have also given you some examples of graphics created with these plugins, and explained how to get them from the official website of CValley.

      - -

      We hope you have found this article informative and useful, and that you will give CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac a try.

      - -

      Thank you for reading, and happy designing!

      -

      If you want to learn more about CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, you can visit the official website of CValley at https://www.cvalley.com/.

      - -

      There you can find more information about these plugins, such as features, screenshots, tutorials, reviews, testimonials, FAQs, support, updates, etc.

      - -

      You can also watch some videos that demonstrate how these plugins work and what they can do for your graphics.

      - -

      Here are some links to some of the videos:

      - -
        -
      • CValley FILTERiT 5.0.4 Overview: https://www.youtube.com/watch?v=Zl8w9XQYf7Q
      • -
      • CValley Xtream Path 2.0.4 Overview: https://www.youtube.com/watch?v=9yjy7Mx6vqk
      • -
      • CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Bundle: https://www.youtube.com/watch?v=8wFJn6zJZbE
      • -
      - -

      You can also join the CValley community on social media platforms such as Facebook, Twitter, Instagram, and Pinterest.

      - -

      There you can follow the latest news and updates about these plugins, as well as share your feedback and suggestions with other users and developers.

      - -

      Here are some links to some of the social media pages:

      - -
        -
      • CValley Facebook: https://www.facebook.com/cvalleyinc
      • -
      • CValley Twitter: https://twitter.com/cvalleyinc
      • -
      • CValley Instagram: https://www.instagram.com/cvalleyinc
      • -
      • CValley Pinterest: https://www.pinterest.com/cvalleyinc
      • -
      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are two amazing plugins that can enhance your creativity and productivity with Adobe Illustrator.

      - -

      They offer more than 120 filters and tools that can help you create stunning graphics with ease.

      - -

      They are compatible with Adobe Illustrator CS6/CC – 2023, and work on both Windows and Mac platforms.

      - -

      They are available for purchase from the official website of CValley at https://www.cvalley.com/, or you can get a bundle of both plugins for $199 from https://www.cvalley.com/store/bundle/.

      - -

      If you are a graphic designer, illustrator, or web developer who uses Adobe Illustrator regularly or occasionally,

      -

      you might want to try CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac.

      - -

      These plugins are not only powerful and versatile, but also easy to use and affordable.

      - -

      They can help you create graphics that stand out from the crowd and impress your clients and audience.

      - -

      In this article, we have introduced you to CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac, and showed you how they can transform your design workflow with Adobe Illustrator.

      - -

      We have also given you some examples of graphics created with these plugins, and explained how to get them from the official website of CValley.

      - -

      We have also provided you with some links to some videos and social media pages that can help you learn more about these plugins and join the CValley community.

      - -

      Conclusion

      - -

      CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac are two amazing plugins that can enhance your creativity and productivity with Adobe Illustrator.

      - -

      They offer more than 120 filters and tools that can help you create stunning graphics with ease.

      - -

      They are compatible with Adobe Illustrator CS6/CC – 2023, and work on both Windows and Mac platforms.

      - -

      They are available for purchase from the official website of CValley at https://www.cvalley.com/, or you can get a bundle of both plugins for $199 from https://www.cvalley.com/store/bundle/.

      - -

      If you are a graphic designer, illustrator, or web developer who uses Adobe Illustrator regularly or occasionally, you should definitely give CValley FILTERiT 5.0.4 and Xtream Path 2.0.4 Win Mac a try.

      - -

      You will not regret it, as these plugins will make your design workflow more fun and rewarding.

      - -

      Thank you for reading this article, and happy designing!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Crack Para Deejaysystem Video Vj2 3.3.0.md b/spaces/inreVtussa/clothingai/Examples/Crack Para Deejaysystem Video Vj2 3.3.0.md deleted file mode 100644 index abd7890afa457524f647a47f23c0d7f47836a700..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Crack Para Deejaysystem Video Vj2 3.3.0.md +++ /dev/null @@ -1,6 +0,0 @@ -

      crack para deejaysystem video vj2 3.3.0


      DOWNLOADhttps://tiurll.com/2uClOS



      -
      -IDM Crack 6.32 Build 5 + Serial Keys Full Version Download IDM Crack incl Patch ... Your search for Arkaos Grandvj 1.6.5 may return better results if you avoid ... 4.9.0 | Deejaysystem Video Vj2 3.3.0 | Softonpc Universal Maps Downloader 7.6 ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/jamesbradbury333/fastai-week-2/app.py b/spaces/jamesbradbury333/fastai-week-2/app.py deleted file mode 100644 index ba1fd0ccc7aaa511ddbe9f7a96e62691a758b263..0000000000000000000000000000000000000000 --- a/spaces/jamesbradbury333/fastai-week-2/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import gradio as gr - - -learn = load_learner('brain_ai_model.pkl') - - -categories = ('Brain', 'Computer') - -def classify_image(img): - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - -image = gr.inputs.Image(shape=(192,192)) -label = gr.outputs.Label() -examples = ['brain.jpg', 'computer.jpg', 'dunno.jpg'] - -interface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -interface.launch(inline=False) diff --git a/spaces/javiermontesinos/whisper/app.py b/spaces/javiermontesinos/whisper/app.py deleted file mode 100644 index 89966c9fd3ea7fac0d7668d97fda3919b2e676d2..0000000000000000000000000000000000000000 --- a/spaces/javiermontesinos/whisper/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import whisper -import gradio as gr - -model = whisper.load_model("small") - -def transcribe(audio): - - #time.sleep(3) - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - return result.text - - - -gr.Interface( - title = 'OpenAI Whisper ASR Gradio Web UI', - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath") - ], - outputs=[ - "textbox" - ], - live=True).launch() diff --git a/spaces/jbetker/tortoise/tortoise/models/transformer.py b/spaces/jbetker/tortoise/tortoise/models/transformer.py deleted file mode 100644 index aa59b462a3f9c2680f28ceb1b87480258f0293f0..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/models/transformer.py +++ /dev/null @@ -1,219 +0,0 @@ -from functools import partial - -import torch -import torch.nn.functional as F -from einops import rearrange -from rotary_embedding_torch import RotaryEmbedding, broadcat -from torch import nn - - -# helpers - - -def exists(val): - return val is not None - - -def default(val, d): - return val if exists(val) else d - - -def cast_tuple(val, depth = 1): - if isinstance(val, list): - val = tuple(val) - return val if isinstance(val, tuple) else (val,) * depth - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def stable_softmax(t, dim = -1, alpha = 32 ** 2): - t = t / alpha - t = t - torch.amax(t, dim = dim, keepdim = True).detach() - return (t * alpha).softmax(dim = dim) - - -def route_args(router, args, depth): - routed_args = [(dict(), dict()) for _ in range(depth)] - matched_keys = [key for key in args.keys() if key in router] - - for key in matched_keys: - val = args[key] - for depth, ((f_args, g_args), routes) in enumerate(zip(routed_args, router[key])): - new_f_args, new_g_args = map(lambda route: ({key: val} if route else {}), routes) - routed_args[depth] = ({**f_args, **new_f_args}, {**g_args, **new_g_args}) - return routed_args - - -# classes -class SequentialSequence(nn.Module): - def __init__(self, layers, args_route = {}, layer_dropout = 0.): - super().__init__() - assert all(len(route) == len(layers) for route in args_route.values()), 'each argument route map must have the same depth as the number of sequential layers' - self.layers = layers - self.args_route = args_route - self.layer_dropout = layer_dropout - - def forward(self, x, **kwargs): - args = route_args(self.args_route, kwargs, len(self.layers)) - layers_and_args = list(zip(self.layers, args)) - - for (f, g), (f_args, g_args) in layers_and_args: - x = x + f(x, **f_args) - x = x + g(x, **g_args) - return x - - -class DivideMax(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - maxes = x.amax(dim = self.dim, keepdim = True).detach() - return x / maxes - - -# https://arxiv.org/abs/2103.17239 -class LayerScale(nn.Module): - def __init__(self, dim, depth, fn): - super().__init__() - if depth <= 18: - init_eps = 0.1 - elif depth > 18 and depth <= 24: - init_eps = 1e-5 - else: - init_eps = 1e-6 - - scale = torch.zeros(1, 1, dim).fill_(init_eps) - self.scale = nn.Parameter(scale) - self.fn = fn - def forward(self, x, **kwargs): - return self.fn(x, **kwargs) * self.scale - -# layer norm - - -class PreNorm(nn.Module): - def __init__(self, dim, fn, sandwich = False): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.norm_out = nn.LayerNorm(dim) if sandwich else nn.Identity() - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - x = self.fn(x, **kwargs) - return self.norm_out(x) - -# feed forward - - -class GEGLU(nn.Module): - def forward(self, x): - x, gates = x.chunk(2, dim = -1) - return x * F.gelu(gates) - - -class FeedForward(nn.Module): - def __init__(self, dim, dropout = 0., mult = 4.): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, dim * mult * 2), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim * mult, dim) - ) - - def forward(self, x): - return self.net(x) - -# Attention - - -class Attention(nn.Module): - def __init__(self, dim, seq_len, causal = True, heads = 8, dim_head = 64, dropout = 0.): - super().__init__() - inner_dim = dim_head * heads - self.heads = heads - self.seq_len = seq_len - self.scale = dim_head ** -0.5 - - self.causal = causal - - self.to_qkv = nn.Linear(dim, inner_dim * 3, bias = False) - self.to_out = nn.Sequential( - nn.Linear(inner_dim, dim), - nn.Dropout(dropout) - ) - - def forward(self, x, mask = None): - b, n, _, h, device = *x.shape, self.heads, x.device - softmax = torch.softmax - - qkv = self.to_qkv(x).chunk(3, dim = -1) - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h = h), qkv) - - q = q * self.scale - - dots = torch.einsum('b h i d, b h j d -> b h i j', q, k) - mask_value = max_neg_value(dots) - - if exists(mask): - mask = rearrange(mask, 'b j -> b () () j') - dots.masked_fill_(~mask, mask_value) - del mask - - if self.causal: - i, j = dots.shape[-2:] - mask = torch.ones(i, j, device = device).triu_(j - i + 1).bool() - dots.masked_fill_(mask, mask_value) - - attn = softmax(dots, dim=-1) - - out = torch.einsum('b h i j, b h j d -> b h i d', attn, v) - out = rearrange(out, 'b h n d -> b n (h d)') - out = self.to_out(out) - return out - - -# main transformer class -class Transformer(nn.Module): - def __init__( - self, - *, - dim, - depth, - seq_len, - causal = True, - heads = 8, - dim_head = 64, - ff_mult = 4, - attn_dropout = 0., - ff_dropout = 0., - sparse_attn = False, - sandwich_norm = False, - ): - super().__init__() - layers = nn.ModuleList([]) - sparse_layer = cast_tuple(sparse_attn, depth) - - for ind, sparse_attn in zip(range(depth), sparse_layer): - attn = Attention(dim, causal = causal, seq_len = seq_len, heads = heads, dim_head = dim_head, dropout = attn_dropout) - - ff = FeedForward(dim, mult = ff_mult, dropout = ff_dropout) - - layers.append(nn.ModuleList([ - LayerScale(dim, ind + 1, PreNorm(dim, attn, sandwich = sandwich_norm)), - LayerScale(dim, ind + 1, PreNorm(dim, ff, sandwich = sandwich_norm)) - ])) - - execute_type = SequentialSequence - route_attn = ((True, False),) * depth - attn_route_map = {'mask': route_attn} - - self.layers = execute_type(layers, args_route = attn_route_map) - - def forward(self, x, **kwargs): - return self.layers(x, **kwargs) \ No newline at end of file diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/app/interface/generate/ExistingModel.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/app/interface/generate/ExistingModel.tsx deleted file mode 100644 index c0d099a86ce6e60508b3be2bc4f9767c69f9e564..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/app/interface/generate/ExistingModel.tsx +++ /dev/null @@ -1,16 +0,0 @@ -export function ExistingModel({ - name, - example, - downloadURL -}: { - name: string, - example: string, - downloadURL: string -}) { - return ( -
      -
      Put thumbnail here
      -
      {name}
      -
      - ) -} \ No newline at end of file diff --git a/spaces/jeonchangbin49/De-limiter/prepro/delimit_valid_L_prepro.py b/spaces/jeonchangbin49/De-limiter/prepro/delimit_valid_L_prepro.py deleted file mode 100644 index bf56df01b4bd44dc203cc856608a86aaaf84544b..0000000000000000000000000000000000000000 --- a/spaces/jeonchangbin49/De-limiter/prepro/delimit_valid_L_prepro.py +++ /dev/null @@ -1,41 +0,0 @@ -import os -import json - -from torch.utils.data import DataLoader -import soundfile as sf -import tqdm - -from dataloader import DelimitValidDataset - - -def main(): - # Parameters - data_path = "/path/to/musdb18hq" - save_path = "/path/to/musdb18hq_limited_L" - batch_size = 1 - num_workers = 1 - sr = 44100 - - # Dataset - dataset = DelimitValidDataset(root=data_path, valid_target_lufs=-14.39) - data_loader = DataLoader( - dataset, batch_size=batch_size, num_workers=num_workers, shuffle=False - ) - dict_valid_loudness = {} - # Preprocessing - for limited_audio, orig_audio, audio_name, loudness in tqdm.tqdm(data_loader): - audio_name = audio_name[0] - limited_audio = limited_audio[0].numpy() - loudness = float(loudness[0].numpy()) - dict_valid_loudness[audio_name] = loudness - # Save audio - os.makedirs(os.path.join(save_path, "valid"), exist_ok=True) - audio_path = os.path.join(save_path, "valid", audio_name) - sf.write(f"{audio_path}.wav", limited_audio.T, sr) - # write json write code - with open(os.path.join(save_path, "valid_loudness.json"), "w") as f: - json.dump(dict_valid_loudness, f, indent=4) - - -if __name__ == "__main__": - main() diff --git a/spaces/jiejiejie0420/bingo/src/state/index.ts b/spaces/jiejiejie0420/bingo/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' -] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_backends/_trio.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_backends/_trio.py deleted file mode 100644 index cf2894350952e1169a6c77ea7c767e892f3efc1e..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_backends/_trio.py +++ /dev/null @@ -1,996 +0,0 @@ -from __future__ import annotations - -import array -import math -import socket -from concurrent.futures import Future -from contextvars import copy_context -from dataclasses import dataclass -from functools import partial -from io import IOBase -from os import PathLike -from signal import Signals -from types import TracebackType -from typing import ( - IO, - TYPE_CHECKING, - Any, - AsyncGenerator, - AsyncIterator, - Awaitable, - Callable, - Collection, - Coroutine, - Generic, - Iterable, - Mapping, - NoReturn, - Sequence, - TypeVar, - cast, -) - -import sniffio -import trio.from_thread -from outcome import Error, Outcome, Value -from trio.socket import SocketType as TrioSocketType -from trio.to_thread import run_sync - -from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType - -if TYPE_CHECKING: - from trio_typing import TaskStatus - -try: - from trio import lowlevel as trio_lowlevel -except ImportError: - from trio import hazmat as trio_lowlevel # type: ignore[no-redef] - from trio.hazmat import wait_readable, wait_writable -else: - from trio.lowlevel import wait_readable, wait_writable - -try: - trio_open_process = trio_lowlevel.open_process -except AttributeError: - # isort: off - from trio import ( # type: ignore[attr-defined, no-redef] - open_process as trio_open_process, - ) - -T_Retval = TypeVar("T_Retval") -T_SockAddr = TypeVar("T_SockAddr", str, IPSockAddrType) - - -# -# Event loop -# - -run = trio.run -current_token = trio.lowlevel.current_trio_token -RunVar = trio.lowlevel.RunVar - - -# -# Miscellaneous -# - -sleep = trio.sleep - - -# -# Timeouts and cancellation -# - - -class CancelScope(BaseCancelScope): - def __new__( - cls, original: trio.CancelScope | None = None, **kwargs: object - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, original: trio.CancelScope | None = None, **kwargs: Any) -> None: - self.__original = original or trio.CancelScope(**kwargs) - - def __enter__(self) -> CancelScope: - self.__original.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - # https://github.com/python-trio/trio-typing/pull/79 - return self.__original.__exit__( # type: ignore[func-returns-value] - exc_type, exc_val, exc_tb - ) - - def cancel(self) -> DeprecatedAwaitable: - self.__original.cancel() - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self.__original.deadline - - @deadline.setter - def deadline(self, value: float) -> None: - self.__original.deadline = value - - @property - def cancel_called(self) -> bool: - return self.__original.cancel_called - - @property - def shield(self) -> bool: - return self.__original.shield - - @shield.setter - def shield(self, value: bool) -> None: - self.__original.shield = value - - -CancelledError = trio.Cancelled -checkpoint = trio.lowlevel.checkpoint -checkpoint_if_cancelled = trio.lowlevel.checkpoint_if_cancelled -cancel_shielded_checkpoint = trio.lowlevel.cancel_shielded_checkpoint -current_effective_deadline = trio.current_effective_deadline -current_time = trio.current_time - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup, trio.MultiError): - pass - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self._active = False - self._nursery_manager = trio.open_nursery() - self.cancel_scope = None # type: ignore[assignment] - - async def __aenter__(self) -> TaskGroup: - self._active = True - self._nursery = await self._nursery_manager.__aenter__() - self.cancel_scope = CancelScope(self._nursery.cancel_scope) - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - try: - return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb) - except trio.MultiError as exc: - raise ExceptionGroup(exc.exceptions) from None - finally: - self._active = False - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - self._nursery.start_soon(func, *args, name=name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> object: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - return await self._nursery.start(func, *args, name=name) - - -# -# Threads -# - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: trio.CapacityLimiter | None = None, -) -> T_Retval: - def wrapper() -> T_Retval: - with claim_worker_thread("trio"): - return func(*args) - - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - return await run_sync( - context.run, wrapper, cancellable=cancellable, limiter=limiter - ) - - -# TODO: remove this workaround when trio 0.20 is the minimum requirement -def run_async_from_thread( - fn: Callable[..., Awaitable[T_Retval]], *args: Any -) -> T_Retval: - async def wrapper() -> T_Retval: - retval: T_Retval - - async def inner() -> None: - nonlocal retval - __tracebackhide__ = True - retval = await fn(*args) - - async with trio.open_nursery() as n: - context.run(n.start_soon, inner) - - __tracebackhide__ = True - return retval # noqa: F821 - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - return trio.from_thread.run(wrapper) - - -def run_sync_from_thread(fn: Callable[..., T_Retval], *args: Any) -> T_Retval: - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - retval = trio.from_thread.run_sync(copy_context().run, fn, *args) - return cast(T_Retval, retval) - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._token = trio.lowlevel.current_trio_token() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - trio.from_thread.run_sync( - context.run, - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - trio_token=self._token, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class ReceiveStreamWrapper(abc.ByteReceiveStream): - _stream: trio.abc.ReceiveStream - - async def receive(self, max_bytes: int | None = None) -> bytes: - try: - data = await self._stream.receive_some(max_bytes) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - if data: - return data - else: - raise EndOfStream - - async def aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class SendStreamWrapper(abc.ByteSendStream): - _stream: trio.abc.SendStream - - async def send(self, item: bytes) -> None: - try: - await self._stream.send_all(item) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - async def aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class Process(abc.Process): - _process: trio.Process - _stdin: abc.ByteSendStream | None - _stdout: abc.ByteReceiveStream | None - _stderr: abc.ByteReceiveStream | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: - return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: Signals) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - process = await trio_open_process( # type: ignore[misc] - command, # type: ignore[arg-type] - stdin=stdin, - stdout=stdout, - stderr=stderr, - shell=shell, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - stdin_stream = SendStreamWrapper(process.stdin) if process.stdin else None - stdout_stream = ReceiveStreamWrapper(process.stdout) if process.stdout else None - stderr_stream = ReceiveStreamWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -class _ProcessPoolShutdownInstrument(trio.abc.Instrument): - def after_run(self) -> None: - super().after_run() - - -current_default_worker_process_limiter: RunVar = RunVar( - "current_default_worker_process_limiter" -) - - -async def _shutdown_process_pool(workers: set[Process]) -> None: - process: Process - try: - await sleep(math.inf) - except trio.Cancelled: - for process in workers: - if process.returncode is None: - process.kill() - - with CancelScope(shield=True): - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - trio.lowlevel.spawn_system_task(_shutdown_process_pool, workers) - - -# -# Sockets and networking -# - - -class _TrioSocketMixin(Generic[T_SockAddr]): - def __init__(self, trio_socket: TrioSocketType) -> None: - self._trio_socket = trio_socket - self._closed = False - - def _check_closed(self) -> None: - if self._closed: - raise ClosedResourceError - if self._trio_socket.fileno() < 0: - raise BrokenResourceError - - @property - def _raw_socket(self) -> socket.socket: - return self._trio_socket._sock # type: ignore[attr-defined] - - async def aclose(self) -> None: - if self._trio_socket.fileno() >= 0: - self._closed = True - self._trio_socket.close() - - def _convert_socket_error(self, exc: BaseException) -> NoReturn: - if isinstance(exc, trio.ClosedResourceError): - raise ClosedResourceError from exc - elif self._trio_socket.fileno() < 0 and self._closed: - raise ClosedResourceError from None - elif isinstance(exc, OSError): - raise BrokenResourceError from exc - else: - raise exc - - -class SocketStream(_TrioSocketMixin, abc.SocketStream): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - try: - data = await self._trio_socket.recv(max_bytes) - except BaseException as exc: - self._convert_socket_error(exc) - - if data: - return data - else: - raise EndOfStream - - async def send(self, item: bytes) -> None: - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = await self._trio_socket.send(view) - except BaseException as exc: - self._convert_socket_error(exc) - - view = view[bytes_sent:] - - async def send_eof(self) -> None: - self._trio_socket.shutdown(socket.SHUT_WR) - - -class UNIXSocketStream(SocketStream, abc.UNIXSocketStream): - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - fds = array.array("i") - await checkpoint() - with self._receive_guard: - while True: - try: - message, ancdata, flags, addr = await self._trio_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BaseException as exc: - self._convert_socket_error(exc) - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - await self._trio_socket.sendmsg( - [message], - [ - ( - socket.SOL_SOCKET, - socket.SCM_RIGHTS, # type: ignore[list-item] - fdarray, - ) - ], - ) - break - except BaseException as exc: - self._convert_socket_error(exc) - - -class TCPSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> SocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - self._convert_socket_error(exc) - - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - return SocketStream(trio_socket) - - -class UNIXSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> UNIXSocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - self._convert_socket_error(exc) - - return UNIXSocketStream(trio_socket) - - -class UDPSocket(_TrioSocketMixin[IPSockAddrType], abc.UDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - try: - data, addr = await self._trio_socket.recvfrom(65536) - return data, convert_ipv6_sockaddr(addr) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - try: - await self._trio_socket.sendto(*item) - except BaseException as exc: - self._convert_socket_error(exc) - - -class ConnectedUDPSocket(_TrioSocketMixin[IPSockAddrType], abc.ConnectedUDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self) -> bytes: - with self._receive_guard: - try: - return await self._trio_socket.recv(65536) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: bytes) -> None: - with self._send_guard: - try: - await self._trio_socket.send(item) - except BaseException as exc: - self._convert_socket_error(exc) - - -async def connect_tcp( - host: str, port: int, local_address: IPSockAddrType | None = None -) -> SocketStream: - family = socket.AF_INET6 if ":" in host else socket.AF_INET - trio_socket = trio.socket.socket(family) - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - if local_address: - await trio_socket.bind(local_address) - - try: - await trio_socket.connect((host, port)) - except BaseException: - trio_socket.close() - raise - - return SocketStream(trio_socket) - - -async def connect_unix(path: str) -> UNIXSocketStream: - trio_socket = trio.socket.socket(socket.AF_UNIX) - try: - await trio_socket.connect(path) - except BaseException: - trio_socket.close() - raise - - return UNIXSocketStream(trio_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - trio_socket = trio.socket.socket(family=family, type=socket.SOCK_DGRAM) - - if reuse_port: - trio_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) - - if local_address: - await trio_socket.bind(local_address) - - if remote_address: - await trio_socket.connect(remote_address) - return ConnectedUDPSocket(trio_socket) - else: - return UDPSocket(trio_socket) - - -getaddrinfo = trio.socket.getaddrinfo -getnameinfo = trio.socket.getnameinfo - - -async def wait_socket_readable(sock: socket.socket) -> None: - try: - await wait_readable(sock) - except trio.ClosedResourceError as exc: - raise ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("reading from") from None - - -async def wait_socket_writable(sock: socket.socket) -> None: - try: - await wait_writable(sock) - except trio.ClosedResourceError as exc: - raise ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("writing to") from None - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self.__original = trio.Event() - - def is_set(self) -> bool: - return self.__original.is_set() - - async def wait(self) -> None: - return await self.__original.wait() - - def statistics(self) -> EventStatistics: - orig_statistics = self.__original.statistics() - return EventStatistics(tasks_waiting=orig_statistics.tasks_waiting) - - def set(self) -> DeprecatedAwaitable: - self.__original.set() - return DeprecatedAwaitable(self.set) - - -class CapacityLimiter(BaseCapacityLimiter): - def __new__(cls, *args: object, **kwargs: object) -> CapacityLimiter: - return object.__new__(cls) - - def __init__( - self, *args: Any, original: trio.CapacityLimiter | None = None - ) -> None: - self.__original = original or trio.CapacityLimiter(*args) - - async def __aenter__(self) -> None: - return await self.__original.__aenter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - await self.__original.__aexit__(exc_type, exc_val, exc_tb) - - @property - def total_tokens(self) -> float: - return self.__original.total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - self.__original.total_tokens = value - - @property - def borrowed_tokens(self) -> int: - return self.__original.borrowed_tokens - - @property - def available_tokens(self) -> float: - return self.__original.available_tokens - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.__original.acquire_nowait() - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable: - self.__original.acquire_on_behalf_of_nowait(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - await self.__original.acquire() - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await self.__original.acquire_on_behalf_of(borrower) - - def release(self) -> None: - return self.__original.release() - - def release_on_behalf_of(self, borrower: object) -> None: - return self.__original.release_on_behalf_of(borrower) - - def statistics(self) -> CapacityLimiterStatistics: - orig = self.__original.statistics() - return CapacityLimiterStatistics( - borrowed_tokens=orig.borrowed_tokens, - total_tokens=orig.total_tokens, - borrowers=orig.borrowers, - tasks_waiting=orig.tasks_waiting, - ) - - -_capacity_limiter_wrapper: RunVar = RunVar("_capacity_limiter_wrapper") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _capacity_limiter_wrapper.get() - except LookupError: - limiter = CapacityLimiter( - original=trio.to_thread.current_default_thread_limiter() - ) - _capacity_limiter_wrapper.set(limiter) - return limiter - - -# -# Signal handling -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - _iterator: AsyncIterator[int] - - def __init__(self, signals: tuple[Signals, ...]): - self._signals = signals - - def __enter__(self) -> _SignalReceiver: - self._cm = trio.open_signal_receiver(*self._signals) - self._iterator = self._cm.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self._cm.__exit__(exc_type, exc_val, exc_tb) - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> Signals: - signum = await self._iterator.__anext__() - return Signals(signum) - - -def open_signal_receiver(*signals: Signals) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def get_current_task() -> TaskInfo: - task = trio_lowlevel.current_task() - - parent_id = None - if task.parent_nursery and task.parent_nursery.parent_task: - parent_id = id(task.parent_nursery.parent_task) - - return TaskInfo(id(task), parent_id, task.name, task.coro) - - -def get_running_tasks() -> list[TaskInfo]: - root_task = trio_lowlevel.current_root_task() - task_infos = [TaskInfo(id(root_task), None, root_task.name, root_task.coro)] - nurseries = root_task.child_nurseries - while nurseries: - new_nurseries: list[trio.Nursery] = [] - for nursery in nurseries: - for task in nursery.child_tasks: - task_infos.append( - TaskInfo(id(task), id(nursery.parent_task), task.name, task.coro) - ) - new_nurseries.extend(task.child_nurseries) - - nurseries = new_nurseries - - return task_infos - - -def wait_all_tasks_blocked() -> Awaitable[None]: - import trio.testing - - return trio.testing.wait_all_tasks_blocked() - - -class TestRunner(abc.TestRunner): - def __init__(self, **options: Any) -> None: - from collections import deque - from queue import Queue - - self._call_queue: Queue[Callable[..., object]] = Queue() - self._result_queue: deque[Outcome] = deque() - self._stop_event: trio.Event | None = None - self._nursery: trio.Nursery | None = None - self._options = options - - async def _trio_main(self) -> None: - self._stop_event = trio.Event() - async with trio.open_nursery() as self._nursery: - await self._stop_event.wait() - - async def _call_func( - self, func: Callable[..., Awaitable[object]], args: tuple, kwargs: dict - ) -> None: - try: - retval = await func(*args, **kwargs) - except BaseException as exc: - self._result_queue.append(Error(exc)) - else: - self._result_queue.append(Value(retval)) - - def _main_task_finished(self, outcome: object) -> None: - self._nursery = None - - def _get_nursery(self) -> trio.Nursery: - if self._nursery is None: - trio.lowlevel.start_guest_run( - self._trio_main, - run_sync_soon_threadsafe=self._call_queue.put, - done_callback=self._main_task_finished, - **self._options, - ) - while self._nursery is None: - self._call_queue.get()() - - return self._nursery - - def _call( - self, func: Callable[..., Awaitable[T_Retval]], *args: object, **kwargs: object - ) -> T_Retval: - self._get_nursery().start_soon(self._call_func, func, args, kwargs) - while not self._result_queue: - self._call_queue.get()() - - outcome = self._result_queue.pop() - return outcome.unwrap() - - def close(self) -> None: - if self._stop_event: - self._stop_event.set() - while self._nursery is not None: - self._call_queue.get()() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner(*, task_status: TaskStatus[T_Retval]) -> None: - agen = fixture_func(**kwargs) - retval = await agen.asend(None) - task_status.started(retval) - await teardown_event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - teardown_event = trio.Event() - fixture_value = self._call(lambda: self._get_nursery().start(fixture_runner)) - yield fixture_value - teardown_event.set() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - return self._call(fixture_func, **kwargs) - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - self._call(test_func, **kwargs) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/setters.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/setters.py deleted file mode 100644 index 12ed6750df35b96e2ccde24a9752dca22929188d..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/setters.py +++ /dev/null @@ -1,73 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly used hooks for on_setattr. -""" - - -from . import _config -from .exceptions import FrozenAttributeError - - -def pipe(*setters): - """ - Run all *setters* and return the return value of the last one. - - .. versionadded:: 20.1.0 - """ - - def wrapped_pipe(instance, attrib, new_value): - rv = new_value - - for setter in setters: - rv = setter(instance, attrib, rv) - - return rv - - return wrapped_pipe - - -def frozen(_, __, ___): - """ - Prevent an attribute to be modified. - - .. versionadded:: 20.1.0 - """ - raise FrozenAttributeError() - - -def validate(instance, attrib, new_value): - """ - Run *attrib*'s validator on *new_value* if it has one. - - .. versionadded:: 20.1.0 - """ - if _config._run_validators is False: - return new_value - - v = attrib.validator - if not v: - return new_value - - v(instance, attrib, new_value) - - return new_value - - -def convert(instance, attrib, new_value): - """ - Run *attrib*'s converter -- if it has one -- on *new_value* and return the - result. - - .. versionadded:: 20.1.0 - """ - c = attrib.converter - if c: - return c(new_value) - - return new_value - - -# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes. -# autodata stopped working, so the docstring is inlined in the API docs. -NO_OP = object() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/typings.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/typings.py deleted file mode 100644 index c796c65c4ee8cf53f01204625e4b52776a8f105e..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/typings.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright 2023-Present MongoDB, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Type aliases used by bson""" -from typing import TYPE_CHECKING, Any, Mapping, MutableMapping, TypeVar, Union - -if TYPE_CHECKING: - from array import array - from mmap import mmap - - from bson.raw_bson import RawBSONDocument - - -# Common Shared Types. -_DocumentOut = Union[MutableMapping[str, Any], "RawBSONDocument"] -_DocumentType = TypeVar("_DocumentType", bound=Mapping[str, Any]) -_DocumentTypeArg = TypeVar("_DocumentTypeArg", bound=Mapping[str, Any]) -_ReadableBuffer = Union[bytes, memoryview, "mmap", "array"] diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/APL.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/APL.py deleted file mode 100644 index f1bb01db199f8e46266f8c128aa8376903d4f337..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/IN/APL.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2017 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -import binascii -import codecs -import struct - -import dns.exception -import dns.immutable -import dns.ipv4 -import dns.ipv6 -import dns.rdata -import dns.tokenizer - - -@dns.immutable.immutable -class APLItem: - - """An APL list item.""" - - __slots__ = ["family", "negation", "address", "prefix"] - - def __init__(self, family, negation, address, prefix): - self.family = dns.rdata.Rdata._as_uint16(family) - self.negation = dns.rdata.Rdata._as_bool(negation) - if self.family == 1: - self.address = dns.rdata.Rdata._as_ipv4_address(address) - self.prefix = dns.rdata.Rdata._as_int(prefix, 0, 32) - elif self.family == 2: - self.address = dns.rdata.Rdata._as_ipv6_address(address) - self.prefix = dns.rdata.Rdata._as_int(prefix, 0, 128) - else: - self.address = dns.rdata.Rdata._as_bytes(address, max_length=127) - self.prefix = dns.rdata.Rdata._as_uint8(prefix) - - def __str__(self): - if self.negation: - return "!%d:%s/%s" % (self.family, self.address, self.prefix) - else: - return "%d:%s/%s" % (self.family, self.address, self.prefix) - - def to_wire(self, file): - if self.family == 1: - address = dns.ipv4.inet_aton(self.address) - elif self.family == 2: - address = dns.ipv6.inet_aton(self.address) - else: - address = binascii.unhexlify(self.address) - # - # Truncate least significant zero bytes. - # - last = 0 - for i in range(len(address) - 1, -1, -1): - if address[i] != 0: - last = i + 1 - break - address = address[0:last] - l = len(address) - assert l < 128 - if self.negation: - l |= 0x80 - header = struct.pack("!HBB", self.family, self.prefix, l) - file.write(header) - file.write(address) - - -@dns.immutable.immutable -class APL(dns.rdata.Rdata): - - """APL record.""" - - # see: RFC 3123 - - __slots__ = ["items"] - - def __init__(self, rdclass, rdtype, items): - super().__init__(rdclass, rdtype) - for item in items: - if not isinstance(item, APLItem): - raise ValueError("item not an APLItem") - self.items = tuple(items) - - def to_text(self, origin=None, relativize=True, **kw): - return " ".join(map(str, self.items)) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - items = [] - for token in tok.get_remaining(): - item = token.unescape().value - if item[0] == "!": - negation = True - item = item[1:] - else: - negation = False - (family, rest) = item.split(":", 1) - family = int(family) - (address, prefix) = rest.split("/", 1) - prefix = int(prefix) - item = APLItem(family, negation, address, prefix) - items.append(item) - - return cls(rdclass, rdtype, items) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - for item in self.items: - item.to_wire(file) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - items = [] - while parser.remaining() > 0: - header = parser.get_struct("!HBB") - afdlen = header[2] - if afdlen > 127: - negation = True - afdlen -= 128 - else: - negation = False - address = parser.get_bytes(afdlen) - l = len(address) - if header[0] == 1: - if l < 4: - address += b"\x00" * (4 - l) - elif header[0] == 2: - if l < 16: - address += b"\x00" * (16 - l) - else: - # - # This isn't really right according to the RFC, but it - # seems better than throwing an exception - # - address = codecs.encode(address, "hex_codec") - item = APLItem(header[0], negation, address, header[1]) - items.append(item) - return cls(rdclass, rdtype, items) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/keyword_table/utils.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/keyword_table/utils.py deleted file mode 100644 index 2a9b502dd54b7d390c11474b9e51fa3bab334520..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/indices/keyword_table/utils.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Utils for keyword table.""" - -import re -from typing import Optional, Set - -import pandas as pd - -from gpt_index.indices.utils import expand_tokens_with_subtokens -from gpt_index.utils import globals_helper - - -def simple_extract_keywords( - text_chunk: str, max_keywords: Optional[int] = None, filter_stopwords: bool = True -) -> Set[str]: - """Extract keywords with simple algorithm.""" - tokens = [t.strip().lower() for t in re.findall(r"\w+", text_chunk)] - if filter_stopwords: - tokens = [t for t in tokens if t not in globals_helper.stopwords] - value_counts = pd.Series(tokens).value_counts() - keywords = value_counts.index.tolist()[:max_keywords] - return set(keywords) - - -def rake_extract_keywords( - text_chunk: str, - max_keywords: Optional[int] = None, - expand_with_subtokens: bool = True, -) -> Set[str]: - """Extract keywords with RAKE.""" - try: - import nltk - - nltk.download("punkt") - except ImportError: - raise ImportError("Please install nltk: `pip install nltk`") - try: - from rake_nltk import Rake - except ImportError: - raise ImportError("Please install rake_nltk: `pip install rake_nltk`") - - r = Rake() - r.extract_keywords_from_text(text_chunk) - keywords = r.get_ranked_phrases()[:max_keywords] - if expand_with_subtokens: - return set(expand_tokens_with_subtokens(keywords)) - else: - return set(keywords) - - -def extract_keywords_given_response( - response: str, lowercase: bool = True, start_token: str = "" -) -> Set[str]: - """Extract keywords given the GPT-generated response. - - Used by keyword table indices. - Parses : , , ... into [word1, word2, ...] - Raises exception if response doesn't start with - """ - results = [] - response = response.strip() # Strip newlines from responses. - - if response.startswith(start_token): - response = response[len(start_token) :] - - keywords = response.split(",") - for k in keywords: - rk = k - if lowercase: - rk = rk.lower() - results.append(rk.strip()) - - # if keyword consists of multiple words, split into subwords - # (removing stopwords) - return expand_tokens_with_subtokens(set(results)) diff --git a/spaces/johnslegers/ImageProcessService/modules/u2net.py b/spaces/johnslegers/ImageProcessService/modules/u2net.py deleted file mode 100644 index 9ccadc7052c2227b1e3b17e78b3816b58f079030..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/ImageProcessService/modules/u2net.py +++ /dev/null @@ -1,50 +0,0 @@ -import paddlehub as hub -import numpy as np -import cv2 -from PIL import Image -import io -import base64 - -from U2Net.module import U2Net - -Model = U2Net() - -def transBg(rgb, mask): - # First create the image with alpha channel - rgba = cv2.cvtColor(rgb, cv2.COLOR_RGB2RGBA) - # Then assign the alpha channel as the last channel of the image - rgba[:, :, 3] = mask - return rgba - -def b64decode(input): - return base64.b64decode(input.split(',', 1)[1]) - -def b64encode(input): - b = base64.b64encode(input) - b = b.decode("utf-8") - return 'data:image/png;base64,{}'.format(b) - -def numpyToBytes(im): - is_success, im_buf_arr = cv2.imencode(".png", im) - byte_im = im_buf_arr.tobytes() - return byte_im - -def bytesToNumpy(bytes): - return cv2.cvtColor(cv2.imdecode(np.frombuffer(bytes, dtype=np.uint8), cv2.IMREAD_COLOR), cv2.COLOR_RGB2BGR) - -def u2net_inference(img): - result = Model.Segmentation( - images=[bytesToNumpy(b64decode(img))], -# images=[cv2.imdecode(np.fromstring(img, np.uint8), cv2.IMREAD_COLOR)], - paths=None, - batch_size=1, - input_size=320, - output_dir='output', - visualization=True) - - return [ - result[0], - b64encode(numpyToBytes(transBg(result[0]['front'][:,:,::-1], result[0]['mask']))), - b64encode(numpyToBytes(result[0]['front'][:,:,::-1])), - b64encode(numpyToBytes(result[0]['mask'])) - ] diff --git a/spaces/johnsu6616/SD_Helper_01/README.md b/spaces/johnsu6616/SD_Helper_01/README.md deleted file mode 100644 index 3e653bab146bb66395f0351ebe7ae9e1c780ed0a..0000000000000000000000000000000000000000 --- a/spaces/johnsu6616/SD_Helper_01/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SD_Helper_01 -emoji: 📊 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.30.0 -app_file: app.py -pinned: false -license: openrail -duplicated_from: johnsu6616/SD_Helper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jonas/sdg-policy-tracing/src/preprocessing.py b/spaces/jonas/sdg-policy-tracing/src/preprocessing.py deleted file mode 100644 index 7f1a0b742971408bf4614c901aef94d9d5a836f5..0000000000000000000000000000000000000000 --- a/spaces/jonas/sdg-policy-tracing/src/preprocessing.py +++ /dev/null @@ -1,63 +0,0 @@ -from typing import Callable, Dict, List, Optional - -from pathlib import Path -import re -import logging -import string -import streamlit as st -logger = logging.getLogger(__name__) - -import os -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -from haystack.utils import convert_files_to_docs, fetch_archive_from_http -from haystack.nodes.file_converter import BaseConverter, DocxToTextConverter, PDFToTextConverter, TextConverter -from haystack.schema import Document -import pdfplumber - -import pandas as pd - -def load_document( - file: str, - file_name, - encoding: Optional[str] = None, - id_hash_keys: Optional[List[str]] = None, -) -> List[Document]: - - """ - takes docx, txt and pdf files as input and extracts text as well as the filename as metadata. Since haystack - does not take care of all pdf files, pdfplumber is attached to the pipeline in case the pdf extraction fails - via Haystack. - - Returns a list of type haystack.schema.Document - """ - - if file_name.name.endswith('.pdf'): - converter = PDFToTextConverter(remove_numeric_tables=True) - if file_name.name.endswith('.txt'): - converter = TextConverter() - if file_name.name.endswith('.docx'): - converter = DocxToTextConverter() - - - documents = [] - logger.info("Converting {}".format(file_name)) - # PDFToTextConverter, TextConverter, and DocxToTextConverter return a list containing a single Document - document = converter.convert( - file_path=file, meta=None, encoding=encoding, id_hash_keys=id_hash_keys - )[0] - text = document.content - documents.append(Document(content=text, meta={"name": file_name}, id_hash_keys=id_hash_keys)) - - '''check if text is empty and apply different pdf processor. This can happen whith certain pdf types.''' - for i in documents: - if i.content == "": - st.write("using pdfplumber") - text = [] - with pdfplumber.open(file) as pdf: - for page in pdf.pages: - text.append(page.extract_text()) - i.content = ' '.join([page for page in text]) - - return documents - diff --git a/spaces/julien-c/streamlit-cheatsheet/README.md b/spaces/julien-c/streamlit-cheatsheet/README.md deleted file mode 100644 index ea92507dd6eeb0fe83888d911a4dd435379bebcf..0000000000000000000000000000000000000000 --- a/spaces/julien-c/streamlit-cheatsheet/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -emoji: 🖤 -sdk: streamlit -app_file: app.py ---- - -## streamlit-cheatsheet - -Hello \ No newline at end of file diff --git a/spaces/justest/gpt4free/app.py b/spaces/justest/gpt4free/app.py deleted file mode 100644 index 23e3a59d76381e8f30904722f52f1ac57285a006..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/app.py +++ /dev/null @@ -1,172 +0,0 @@ -import g4f -import gradio as gr -from g4f.Provider import ( - Ails, - You, - Bing, - Yqcloud, - Theb, - Aichat, - Bard, - Vercel, - Forefront, - Lockchat, - Liaobots, - H2o, - ChatgptLogin, - DeepAi, - GetGpt -) -import os -import json -import pandas as pd - -from models_for_langchain.model import CustomLLM -from langchain.memory import ConversationBufferWindowMemory, ConversationTokenBufferMemory -from langchain import LLMChain, PromptTemplate -from langchain.prompts import ( - ChatPromptTemplate, - PromptTemplate, - SystemMessagePromptTemplate, - AIMessagePromptTemplate, - HumanMessagePromptTemplate, -) - -provider_dict = { - 'Ails': Ails, - 'You': You, - 'Bing': Bing, - 'Yqcloud': Yqcloud, - 'Theb': Theb, - 'Aichat': Aichat, - 'Bard': Bard, - 'Vercel': Vercel, - 'Forefront': Forefront, - 'Lockchat': Lockchat, - 'Liaobots': Liaobots, - 'H2o': H2o, - 'ChatgptLogin': ChatgptLogin, - 'DeepAi': DeepAi, - 'GetGpt': GetGpt -} - -prompt_set_list = {} -for prompt_file in os.listdir("prompt_set"): - key = prompt_file - if '.csv' in key: - df = pd.read_csv("prompt_set/" + prompt_file) - prompt_dict = dict(zip(df['act'], df['prompt'])) - else: - with open("prompt_set/" + prompt_file, encoding='utf-8') as f: - ds = json.load(f) - prompt_dict = {item["act"]: item["prompt"] for item in ds} - prompt_set_list[key] = prompt_dict - -with gr.Blocks() as demo: - llm = CustomLLM() - - template = """ - Chat with human based on following instructions: - ``` - {system_instruction} - ``` - The following is a conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. - {{chat_history}} - Human: {{human_input}} - Chatbot:""" - - memory = ConversationBufferWindowMemory(k=10, memory_key="chat_history") - - chatbot = gr.Chatbot([], label='AI') - msg = gr.Textbox(value="", label='请输入:') - with gr.Row(): - clear = gr.Button("清空对话", scale=2) - chat_mode = gr.Checkbox(value=True, label='聊天模式', interactive=True, scale=1) - system_msg = gr.Textbox(value="你是一名助手,可以解答问题。", label='系统提示') - with gr.Row(): - default_prompt_set = "1 中文提示词.json" - prompt_set_name = gr.Dropdown(prompt_set_list.keys(), value=default_prompt_set, label='提示词集合') - prompt_name = gr.Dropdown(prompt_set_list[default_prompt_set].keys(), label='提示词', min_width=20) - with gr.Row(): - model_name = gr.Dropdown(['gpt-3.5-turbo', 'gpt-4'], value='gpt-3.5-turbo', label='模型') - provider_name = gr.Dropdown(provider_dict.keys(), value='GetGpt', label='提供者', min_width=20) - - def change_prompt_set(prompt_set_name): - return gr.Dropdown.update(choices=list(prompt_set_list[prompt_set_name].keys())) - - def change_prompt(prompt_set_name, prompt_name): - return gr.update(value=prompt_set_list[prompt_set_name][prompt_name]) - - def user(user_message, history = []): - return gr.update(value="", interactive=False), history + [[user_message, None]] - - def bot(history, model_name, provider_name, system_msg, chat_mode): - history[-1][1] = '' - if len(system_msg)>3000: - system_msg = system_msg[:2000] + system_msg[-1000:] - - if not chat_mode: - global template, memory - llm.model_name = model_name - llm.provider_name = provider_name - prompt = PromptTemplate( - input_variables=["chat_history", "human_input"], template=template.format(system_instruction=system_msg) - ) - llm_chain = LLMChain( - llm=llm, - prompt=prompt, - verbose=False, - memory=memory, - ) - bot_msg = llm_chain.run(history[-1][0]) - for c in bot_msg: - history[-1][1] += c - yield history - else: - prompt = """ - 请你仔细阅读以下提示,然后针对用户的话进行回答。 - 提示: - ``` - {} - ``` - 用户最新的话: - ``` - {} - ``` - 请回答: - """ - - # print(history) - messages = [] - for user_message, assistant_message in history[:-1]: - messages.append({"role": "user", "content": user_message}) - messages.append({"role": "assistant", "content": assistant_message}) - messages.append({"role": "user", "content": history[-1][0]}) - # print(messages) - - bot_msg = g4f.ChatCompletion.create( - model=model_name, - provider=provider_dict[provider_name], - messages=messages, - stream=True) - for c in bot_msg: - history[-1][1] += c - print(c, flush=True, end='') - yield history - - def empty_chat(): - global memory - memory = ConversationBufferWindowMemory(k=10, memory_key="chat_history") - return None - response = msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, [chatbot, model_name, provider_name, system_msg, chat_mode], chatbot - ) - prompt_set_name.select(change_prompt_set, prompt_set_name, prompt_name) - prompt_name.select(change_prompt, [prompt_set_name, prompt_name], system_msg) - - response.then(lambda: gr.update(interactive=True), None, [msg], queue=False) - clear.click(empty_chat, None, [chatbot], queue=False) - -demo.title = "AI Chat" -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/jvde/sovits-webui/modules.py b/spaces/jvde/sovits-webui/modules.py deleted file mode 100644 index 9c7fd9cd6eb8b7e0ec0e08957e970744a374a924..0000000000000000000000000000000000000000 --- a/spaces/jvde/sovits-webui/modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/kastan/ai-teaching-assistant-beta/app.py b/spaces/kastan/ai-teaching-assistant-beta/app.py deleted file mode 100644 index e323b0e87ef31178a5d322be2a0805a4360cf097..0000000000000000000000000000000000000000 --- a/spaces/kastan/ai-teaching-assistant-beta/app.py +++ /dev/null @@ -1,363 +0,0 @@ -import os - -import gradio as gr -import retrieval -# UNCOMMENT ONLY WHEN RUNNING LOCALLY (not on Spaces) -# from dotenv import load_dotenv -from text_generation import Client, InferenceAPIClient -from typing import List, Tuple - -# load API keys from globally-availabe .env file -# SECRETS_FILEPATH = "/mnt/project/chatbotai/huggingface_cache/internal_api_keys.env" -# load_dotenv(dotenv_path=SECRETS_FILEPATH, override=True) - -openchat_preprompt = ( - "\n: Hi!\n: My name is Bot, model version is 0.15, part of an open-source kit for " - "fine-tuning new bots! I was created by Together, LAION, and Ontocord.ai and the open-source " - "community. I am not human, not evil and not alive, and thus have no thoughts and feelings, " - "but I am programmed to be helpful, polite, honest, and friendly. I'm really smart at answering electrical engineering questions.\n") - -# LOAD MODELS -ta = retrieval.Retrieval() -NUM_ANSWERS_GENERATED = 3 - - -def clip_img_search(img): - if img is None: - return [] - else: - return ta.reverse_img_search(img) - - -def get_client(model: str): - if model == "Rallio67/joi2_20Be_instruct_alpha": - return Client(os.getenv("JOI_API_URL")) - if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B": - return Client(os.getenv("OPENCHAT_API_URL")) - return InferenceAPIClient(model, token=os.getenv("HF_TOKEN", None)) - - -def get_usernames(model: str): - """ - Returns: - (str, str, str, str): pre-prompt, username, bot name, separator - """ - if model == "OpenAssistant/oasst-sft-1-pythia-12b": - return "", "<|prompter|>", "<|assistant|>", "<|endoftext|>" - if model == "Rallio67/joi2_20Be_instruct_alpha": - return "", "User: ", "Joi: ", "\n\n" - if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B": - return openchat_preprompt, ": ", ": ", "\n" - return "", "User: ", "Assistant: ", "\n" - - -def predict( - model: str, - inputs: str, - typical_p: float, - top_p: float, - temperature: float, - top_k: int, - repetition_penalty: float, - watermark: bool, - chatbot, - history, -): - client = get_client(model) - preprompt, user_name, assistant_name, sep = get_usernames(model) - - history.append(inputs) - - past = [] - for data in chatbot: - user_data, model_data = data - - if not user_data.startswith(user_name): - user_data = user_name + user_data - if not model_data.startswith(sep + assistant_name): - model_data = sep + assistant_name + model_data - - past.append(user_data + model_data.rstrip() + sep) - - if not inputs.startswith(user_name): - inputs = user_name + inputs - - total_inputs = preprompt + "".join(past) + inputs + sep + assistant_name.rstrip() - - partial_words = "" - - if model == "OpenAssistant/oasst-sft-1-pythia-12b": - iterator = client.generate_stream( - total_inputs, - typical_p=typical_p, - truncate=1000, - watermark=watermark, - max_new_tokens=500, - ) - else: - iterator = client.generate_stream( - total_inputs, - top_p=top_p if top_p < 1.0 else None, - top_k=top_k, - truncate=1000, - repetition_penalty=repetition_penalty, - watermark=watermark, - temperature=temperature, - max_new_tokens=500, - stop_sequences=[user_name.rstrip(), assistant_name.rstrip()], - ) - - chat_response = None - for i, response in enumerate(iterator): - if response.token.special: - continue - - partial_words = partial_words + response.token.text - if partial_words.endswith(user_name.rstrip()): - partial_words = partial_words.rstrip(user_name.rstrip()) - if partial_words.endswith(assistant_name.rstrip()): - partial_words = partial_words.rstrip(assistant_name.rstrip()) - - if i == 0: - history.append(" " + partial_words) - elif response.token.text not in user_name: - history[-1] = partial_words - - chat = [(history[i].strip(), history[i + 1].strip()) for i in range(0, len(history) - 1, 2)] - chat_response = chat - yield chat, history, None, None, None, [] - - cleaned_final_chat_response = clean_chat_response(chat_response) - # Pinecone context retrieval - top_context_list = ta.retrieve_contexts_from_pinecone(user_question=inputs, topk=NUM_ANSWERS_GENERATED) - # yield chat, history, top_context_list[0], top_context_list[1], top_context_list[2], [] - yield cleaned_final_chat_response, history, top_context_list[0], top_context_list[1], top_context_list[2], [] - - cleaned_final_chat_response = clean_chat_response(chat_response) - - # run CLIP - images_list = ta.clip_text_to_image(inputs) - # yield chat, history, top_context_list[0], top_context_list[1], top_context_list[2], images_list - yield cleaned_final_chat_response, history, top_context_list[0], top_context_list[1], top_context_list[2], images_list - -def clean_chat_response(chat: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - ''' Not perfect, but much better at removing all the crazy newlines. ''' - cleaned_chat = [] - for human_chat, bot_chat in chat: - human_chat = human_chat.replace("
      ", "") - human_chat = human_chat.replace("\n\n", "\n") - bot_chat = bot_chat.replace("
      ", "") - bot_chat = bot_chat.replace("\n\n", "\n") - cleaned_chat.append( (human_chat, bot_chat) ) - return cleaned_chat - - -def reset_textbox(): - return gr.update(value="") - - -def radio_on_change( - value: str, - disclaimer, - typical_p, - top_p, - top_k, - temperature, - repetition_penalty, - watermark, -): - if value == "OpenAssistant/oasst-sft-1-pythia-12b": - typical_p = typical_p.update(value=0.2, visible=True) - top_p = top_p.update(visible=False) - top_k = top_k.update(visible=False) - temperature = temperature.update(visible=False) - disclaimer = disclaimer.update(visible=False) - repetition_penalty = repetition_penalty.update(visible=False) - watermark = watermark.update(False) - elif value == "togethercomputer/GPT-NeoXT-Chat-Base-20B": - typical_p = typical_p.update(visible=False) - top_p = top_p.update(value=0.25, visible=True) - top_k = top_k.update(value=50, visible=True) - temperature = temperature.update(value=0.6, visible=True) - repetition_penalty = repetition_penalty.update(value=1.01, visible=True) - watermark = watermark.update(False) - disclaimer = disclaimer.update(visible=True) - else: - typical_p = typical_p.update(visible=False) - top_p = top_p.update(value=0.95, visible=True) - top_k = top_k.update(value=4, visible=True) - temperature = temperature.update(value=0.5, visible=True) - repetition_penalty = repetition_penalty.update(value=1.03, visible=True) - watermark = watermark.update(True) - disclaimer = disclaimer.update(visible=False) - return ( - disclaimer, - typical_p, - top_p, - top_k, - temperature, - repetition_penalty, - watermark, - ) - - -title = """

      🧠 AI Teaching Assistant""" -description = """Better than Google at answering your questions! -""" - -openchat_disclaimer = """ -
      Checkout the official OpenChatKit feedback app for the full experience.
      -""" - -with gr.Blocks(css="""#col_container {margin-left: auto; margin-right: auto;} - #chatbot {height: 520px; overflow: auto;}""") as demo: - gr.HTML(title) - with gr.Row(): - with gr.Accordion("Model choices", open=False, visible=True): - model = gr.Radio( - value="OpenAssistant/oasst-sft-1-pythia-12b", - choices=[ - "OpenAssistant/oasst-sft-1-pythia-12b", - # "togethercomputer/GPT-NeoXT-Chat-Base-20B", - "Rallio67/joi2_20Be_instruct_alpha", - "google/flan-t5-xxl", - "google/flan-ul2", - "bigscience/bloom", - "bigscience/bloomz", - "EleutherAI/gpt-neox-20b", - ], - label="", - interactive=True, - ) - # with gr.Row(): - # with gr.Column(): - # use_gpt3_checkbox = gr.Checkbox(label="Include GPT-3 (paid)?") - # with gr.Column(): - # use_equation_checkbox = gr.Checkbox(label="Prioritize equations?") - state = gr.State([]) - - with gr.Row(): - with gr.Column(): - chatbot = gr.Chatbot(elem_id="chatbot") - inputs = gr.Textbox(placeholder="Ask an Electrical Engineering question!", label="Send a message...") - examples = gr.Examples( - examples=[ - "What is a Finite State Machine?", - "How do you design a functional a Two-Bit Gray Code Counter?", - "How can we compare an 8-bit 2's complement number to the value -1 using AND, OR, and NOT?", - "What does the uninterrupted counting cycle label mean?", - ], - inputs=[inputs], - outputs=[], - ) - gr.Markdown("## Relevant Textbook Passages & Lecture Transcripts") - with gr.Row(): - with gr.Column(): - context1 = gr.Textbox(label="Context 1") - with gr.Column(): - context2 = gr.Textbox(label="Context 2") - with gr.Column(): - context3 = gr.Textbox(label="Context 3") - - gr.Markdown("## Relevant Lecture Slides") - with gr.Row(): - with gr.Column(scale=2.6): - lec_gallery = gr.Gallery(label="Lecture images", show_label=False, elem_id="gallery").style(grid=[2], height="auto") - with gr.Column(scale=1): - inp_image = gr.Image(type="pil", label="Reverse Image Search (optional)", shape=(224, 398)) - - inp_image.change(fn=clip_img_search, inputs=inp_image, outputs=lec_gallery, scroll_to_output=True) - disclaimer = gr.Markdown(openchat_disclaimer, visible=False) - # state = gr.State([]) - - with gr.Row(): - with gr.Accordion("Parameters", open=False, visible=True): - typical_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=0.2, - step=0.05, - interactive=True, - label="Typical P mass", - ) - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=0.25, - step=0.05, - interactive=True, - label="Top-p (nucleus sampling)", - visible=False, - ) - temperature = gr.Slider( - minimum=-0, - maximum=5.0, - value=0.6, - step=0.1, - interactive=True, - label="Temperature", - visible=False, - ) - top_k = gr.Slider( - minimum=1, - maximum=50, - value=50, - step=1, - interactive=True, - label="Top-k", - visible=False, - ) - repetition_penalty = gr.Slider( - minimum=0.1, - maximum=3.0, - value=1.03, - step=0.01, - interactive=True, - label="Repetition Penalty", - visible=False, - ) - watermark = gr.Checkbox(value=False, label="Text watermarking") - - model.change( - lambda value: radio_on_change( - value, - disclaimer, - typical_p, - top_p, - top_k, - temperature, - repetition_penalty, - watermark, - ), - inputs=model, - outputs=[ - disclaimer, - typical_p, - top_p, - top_k, - temperature, - repetition_penalty, - watermark, - ], - ) - - inputs.submit( - predict, - [ - model, - inputs, - typical_p, - top_p, - temperature, - top_k, - repetition_penalty, - watermark, - chatbot, - state, - ], - [chatbot, state, context1, context2, context3, lec_gallery], - ) - inputs.submit(reset_textbox, [], [inputs]) - - gr.Markdown(description) - demo.queue(concurrency_count=16).launch(debug=True) diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/fregan/dwt.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/fregan/dwt.py deleted file mode 100644 index 1c5d995e1a6a8757b21f46dd1a6e74befaee9816..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/vocoder/fregan/dwt.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) 2019, Adobe Inc. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike -# 4.0 International Public License. To view a copy of this license, visit -# https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode. - -# DWT code borrow from https://github.com/LiQiufu/WaveSNet/blob/12cb9d24208c3d26917bf953618c30f0c6b0f03d/DWT_IDWT/DWT_IDWT_layer.py - - -import pywt -import torch -import torch.nn as nn -import torch.nn.functional as F - -__all__ = ['DWT_1D'] -Pad_Mode = ['constant', 'reflect', 'replicate', 'circular'] - - -class DWT_1D(nn.Module): - def __init__(self, pad_type='reflect', wavename='haar', - stride=2, in_channels=1, out_channels=None, groups=None, - kernel_size=None, trainable=False): - - super(DWT_1D, self).__init__() - self.trainable = trainable - self.kernel_size = kernel_size - if not self.trainable: - assert self.kernel_size == None - self.in_channels = in_channels - self.out_channels = self.in_channels if out_channels == None else out_channels - self.groups = self.in_channels if groups == None else groups - assert isinstance(self.groups, int) and self.in_channels % self.groups == 0 - self.stride = stride - assert self.stride == 2 - self.wavename = wavename - self.pad_type = pad_type - assert self.pad_type in Pad_Mode - self.get_filters() - self.initialization() - - def get_filters(self): - wavelet = pywt.Wavelet(self.wavename) - band_low = torch.tensor(wavelet.rec_lo) - band_high = torch.tensor(wavelet.rec_hi) - length_band = band_low.size()[0] - self.kernel_size = length_band if self.kernel_size == None else self.kernel_size - assert self.kernel_size >= length_band - a = (self.kernel_size - length_band) // 2 - b = - (self.kernel_size - length_band - a) - b = None if b == 0 else b - self.filt_low = torch.zeros(self.kernel_size) - self.filt_high = torch.zeros(self.kernel_size) - self.filt_low[a:b] = band_low - self.filt_high[a:b] = band_high - - def initialization(self): - self.filter_low = self.filt_low[None, None, :].repeat((self.out_channels, self.in_channels // self.groups, 1)) - self.filter_high = self.filt_high[None, None, :].repeat((self.out_channels, self.in_channels // self.groups, 1)) - if torch.cuda.is_available(): - self.filter_low = self.filter_low.cuda() - self.filter_high = self.filter_high.cuda() - if self.trainable: - self.filter_low = nn.Parameter(self.filter_low) - self.filter_high = nn.Parameter(self.filter_high) - if self.kernel_size % 2 == 0: - self.pad_sizes = [self.kernel_size // 2 - 1, self.kernel_size // 2 - 1] - else: - self.pad_sizes = [self.kernel_size // 2, self.kernel_size // 2] - - def forward(self, input): - assert isinstance(input, torch.Tensor) - assert len(input.size()) == 3 - assert input.size()[1] == self.in_channels - input = F.pad(input, pad=self.pad_sizes, mode=self.pad_type) - return F.conv1d(input, self.filter_low.to(input.device), stride=self.stride, groups=self.groups), \ - F.conv1d(input, self.filter_high.to(input.device), stride=self.stride, groups=self.groups) diff --git a/spaces/kornia/line-segment-matching/app.py b/spaces/kornia/line-segment-matching/app.py deleted file mode 100644 index 8ff4734a9c22f3a29d3f6f12ba3cb58e377c61c1..0000000000000000000000000000000000000000 --- a/spaces/kornia/line-segment-matching/app.py +++ /dev/null @@ -1,110 +0,0 @@ -import gradio as gr -import kornia as K -import kornia.feature as KF -import torch -import matplotlib - -matplotlib.use("Agg") -import numpy as np -from plot_utils import plot_images, plot_lines, plot_color_line_matches - -sold2 = KF.SOLD2(pretrained=True, config=None) -ransac = K.geometry.RANSAC(model_type="homography_from_linesegments", inl_th=3.0) - - -def infer(img1, img2, line_style: str): - torch_img1 = K.image_to_tensor(img1).float() / 255.0 - torch_img2 = K.image_to_tensor(img2).float() / 255.0 - - torch_img1_gray = K.color.rgb_to_grayscale(torch_img1) - torch_img2_gray = K.color.rgb_to_grayscale(torch_img2) - - imgs = torch.stack( - [torch_img1_gray, torch_img2_gray], - ) - - with torch.inference_mode(): - outputs = sold2(imgs) - - line_seg1 = outputs["line_segments"][0] - line_seg2 = outputs["line_segments"][1] - desc1 = outputs["dense_desc"][0] - desc2 = outputs["dense_desc"][1] - - with torch.inference_mode(): - matches = sold2.match(line_seg1, line_seg2, desc1[None], desc2[None]) - - valid_matches = matches != -1 - match_indices = matches[valid_matches] - - matched_lines1 = line_seg1[valid_matches] - matched_lines2 = line_seg2[match_indices] - - imgs_to_plot = [K.tensor_to_image(torch_img1), K.tensor_to_image(torch_img2)] - - fig = plot_images( - imgs_to_plot, ["Image 1 - detected lines", "Image 2 - detected lines"] - ) - if line_style == "Line Matches": - lines_to_plot = [line_seg1.numpy(), line_seg2.numpy()] - plot_lines(lines_to_plot, fig, ps=3, lw=2, indices={0, 1}) - elif line_style == "Color Line Matches": - plot_color_line_matches([matched_lines1, matched_lines2], fig, lw=2) - elif line_style == "Line Segment Homography Warping": - _, _, img1_warp_to2 = get_homography_values( - matched_lines1, matched_lines2, torch_img1 - ) - fig = plot_images( - [K.tensor_to_image(torch_img2), K.tensor_to_image(img1_warp_to2)], - ["Image 2", "Image 1 wrapped to 2"], - ) - elif line_style == "Matched Lines for Homography Warping": - _, correspondence_mask, _ = get_homography_values( - matched_lines1, matched_lines2, torch_img1 - ) - plot_color_line_matches( - [matched_lines1[correspondence_mask], matched_lines2[correspondence_mask]], - fig, - lw=2, - ) - return fig - - -def get_homography_values(matched_lines1, matched_lines2, torch_img1): - H_ransac, correspondence_mask = ransac( - matched_lines1.flip(dims=(2,)), matched_lines2.flip(dims=(2,)) - ) - img1_warp_to2 = K.geometry.warp_perspective( - torch_img1[None], H_ransac[None], (torch_img1.shape[1:]) - ) - - return H_ransac, correspondence_mask, img1_warp_to2 - - -description = """In this space you can try out Line Detection and Segment Matching with the Kornia library as seen in [this tutorial](https://kornia-tutorials.readthedocs.io/en/latest/line_detection_and_matching_sold2.html). - -Just upload two images of a scene with different view points, choose an option for output and run the demo. -""" - - -Iface = gr.Interface( - fn=infer, - inputs=[ - gr.components.Image(), - gr.components.Image(), - gr.components.Dropdown( - [ - "Line Matches", - "Color Line Matches", - "Line Segment Homography Warping", - "Matched Lines for Homography Warping", - ], - value="Line Matches", - label="Options", - ), - ], - outputs=gr.components.Plot(), - examples=[["terrace0.JPG", "terrace1.JPG", "Line Matches"]], - title="Line Segment Matching with Kornia", - description=description, -).launch() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/environment.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/environment.py deleted file mode 100644 index ea04e8b44330fe22909a2c875c6601e33bd1ffc2..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/jinja2/environment.py +++ /dev/null @@ -1,1667 +0,0 @@ -"""Classes for managing templates and their runtime and compile time -options. -""" -import os -import typing -import typing as t -import weakref -from collections import ChainMap -from functools import lru_cache -from functools import partial -from functools import reduce -from types import CodeType - -from markupsafe import Markup - -from . import nodes -from .compiler import CodeGenerator -from .compiler import generate -from .defaults import BLOCK_END_STRING -from .defaults import BLOCK_START_STRING -from .defaults import COMMENT_END_STRING -from .defaults import COMMENT_START_STRING -from .defaults import DEFAULT_FILTERS -from .defaults import DEFAULT_NAMESPACE -from .defaults import DEFAULT_POLICIES -from .defaults import DEFAULT_TESTS -from .defaults import KEEP_TRAILING_NEWLINE -from .defaults import LINE_COMMENT_PREFIX -from .defaults import LINE_STATEMENT_PREFIX -from .defaults import LSTRIP_BLOCKS -from .defaults import NEWLINE_SEQUENCE -from .defaults import TRIM_BLOCKS -from .defaults import VARIABLE_END_STRING -from .defaults import VARIABLE_START_STRING -from .exceptions import TemplateNotFound -from .exceptions import TemplateRuntimeError -from .exceptions import TemplatesNotFound -from .exceptions import TemplateSyntaxError -from .exceptions import UndefinedError -from .lexer import get_lexer -from .lexer import Lexer -from .lexer import TokenStream -from .nodes import EvalContext -from .parser import Parser -from .runtime import Context -from .runtime import new_context -from .runtime import Undefined -from .utils import _PassArg -from .utils import concat -from .utils import consume -from .utils import import_string -from .utils import internalcode -from .utils import LRUCache -from .utils import missing - -if t.TYPE_CHECKING: - import typing_extensions as te - from .bccache import BytecodeCache - from .ext import Extension - from .loaders import BaseLoader - -_env_bound = t.TypeVar("_env_bound", bound="Environment") - - -# for direct template usage we have up to ten living environments -@lru_cache(maxsize=10) -def get_spontaneous_environment(cls: t.Type[_env_bound], *args: t.Any) -> _env_bound: - """Return a new spontaneous environment. A spontaneous environment - is used for templates created directly rather than through an - existing environment. - - :param cls: Environment class to create. - :param args: Positional arguments passed to environment. - """ - env = cls(*args) - env.shared = True - return env - - -def create_cache( - size: int, -) -> t.Optional[t.MutableMapping[t.Tuple[weakref.ref, str], "Template"]]: - """Return the cache class for the given size.""" - if size == 0: - return None - - if size < 0: - return {} - - return LRUCache(size) # type: ignore - - -def copy_cache( - cache: t.Optional[t.MutableMapping], -) -> t.Optional[t.MutableMapping[t.Tuple[weakref.ref, str], "Template"]]: - """Create an empty copy of the given cache.""" - if cache is None: - return None - - if type(cache) is dict: - return {} - - return LRUCache(cache.capacity) # type: ignore - - -def load_extensions( - environment: "Environment", - extensions: t.Sequence[t.Union[str, t.Type["Extension"]]], -) -> t.Dict[str, "Extension"]: - """Load the extensions from the list and bind it to the environment. - Returns a dict of instantiated extensions. - """ - result = {} - - for extension in extensions: - if isinstance(extension, str): - extension = t.cast(t.Type["Extension"], import_string(extension)) - - result[extension.identifier] = extension(environment) - - return result - - -def _environment_config_check(environment: "Environment") -> "Environment": - """Perform a sanity check on the environment.""" - assert issubclass( - environment.undefined, Undefined - ), "'undefined' must be a subclass of 'jinja2.Undefined'." - assert ( - environment.block_start_string - != environment.variable_start_string - != environment.comment_start_string - ), "block, variable and comment start strings must be different." - assert environment.newline_sequence in { - "\r", - "\r\n", - "\n", - }, "'newline_sequence' must be one of '\\n', '\\r\\n', or '\\r'." - return environment - - -class Environment: - r"""The core component of Jinja is the `Environment`. It contains - important shared variables like configuration, filters, tests, - globals and others. Instances of this class may be modified if - they are not shared and if no template was loaded so far. - Modifications on environments after the first template was loaded - will lead to surprising effects and undefined behavior. - - Here are the possible initialization parameters: - - `block_start_string` - The string marking the beginning of a block. Defaults to ``'{%'``. - - `block_end_string` - The string marking the end of a block. Defaults to ``'%}'``. - - `variable_start_string` - The string marking the beginning of a print statement. - Defaults to ``'{{'``. - - `variable_end_string` - The string marking the end of a print statement. Defaults to - ``'}}'``. - - `comment_start_string` - The string marking the beginning of a comment. Defaults to ``'{#'``. - - `comment_end_string` - The string marking the end of a comment. Defaults to ``'#}'``. - - `line_statement_prefix` - If given and a string, this will be used as prefix for line based - statements. See also :ref:`line-statements`. - - `line_comment_prefix` - If given and a string, this will be used as prefix for line based - comments. See also :ref:`line-statements`. - - .. versionadded:: 2.2 - - `trim_blocks` - If this is set to ``True`` the first newline after a block is - removed (block, not variable tag!). Defaults to `False`. - - `lstrip_blocks` - If this is set to ``True`` leading spaces and tabs are stripped - from the start of a line to a block. Defaults to `False`. - - `newline_sequence` - The sequence that starts a newline. Must be one of ``'\r'``, - ``'\n'`` or ``'\r\n'``. The default is ``'\n'`` which is a - useful default for Linux and OS X systems as well as web - applications. - - `keep_trailing_newline` - Preserve the trailing newline when rendering templates. - The default is ``False``, which causes a single newline, - if present, to be stripped from the end of the template. - - .. versionadded:: 2.7 - - `extensions` - List of Jinja extensions to use. This can either be import paths - as strings or extension classes. For more information have a - look at :ref:`the extensions documentation `. - - `optimized` - should the optimizer be enabled? Default is ``True``. - - `undefined` - :class:`Undefined` or a subclass of it that is used to represent - undefined values in the template. - - `finalize` - A callable that can be used to process the result of a variable - expression before it is output. For example one can convert - ``None`` implicitly into an empty string here. - - `autoescape` - If set to ``True`` the XML/HTML autoescaping feature is enabled by - default. For more details about autoescaping see - :class:`~markupsafe.Markup`. As of Jinja 2.4 this can also - be a callable that is passed the template name and has to - return ``True`` or ``False`` depending on autoescape should be - enabled by default. - - .. versionchanged:: 2.4 - `autoescape` can now be a function - - `loader` - The template loader for this environment. - - `cache_size` - The size of the cache. Per default this is ``400`` which means - that if more than 400 templates are loaded the loader will clean - out the least recently used template. If the cache size is set to - ``0`` templates are recompiled all the time, if the cache size is - ``-1`` the cache will not be cleaned. - - .. versionchanged:: 2.8 - The cache size was increased to 400 from a low 50. - - `auto_reload` - Some loaders load templates from locations where the template - sources may change (ie: file system or database). If - ``auto_reload`` is set to ``True`` (default) every time a template is - requested the loader checks if the source changed and if yes, it - will reload the template. For higher performance it's possible to - disable that. - - `bytecode_cache` - If set to a bytecode cache object, this object will provide a - cache for the internal Jinja bytecode so that templates don't - have to be parsed if they were not changed. - - See :ref:`bytecode-cache` for more information. - - `enable_async` - If set to true this enables async template execution which - allows using async functions and generators. - """ - - #: if this environment is sandboxed. Modifying this variable won't make - #: the environment sandboxed though. For a real sandboxed environment - #: have a look at jinja2.sandbox. This flag alone controls the code - #: generation by the compiler. - sandboxed = False - - #: True if the environment is just an overlay - overlayed = False - - #: the environment this environment is linked to if it is an overlay - linked_to: t.Optional["Environment"] = None - - #: shared environments have this set to `True`. A shared environment - #: must not be modified - shared = False - - #: the class that is used for code generation. See - #: :class:`~jinja2.compiler.CodeGenerator` for more information. - code_generator_class: t.Type["CodeGenerator"] = CodeGenerator - - concat = "".join - - #: the context class that is used for templates. See - #: :class:`~jinja2.runtime.Context` for more information. - context_class: t.Type[Context] = Context - - template_class: t.Type["Template"] - - def __init__( - self, - block_start_string: str = BLOCK_START_STRING, - block_end_string: str = BLOCK_END_STRING, - variable_start_string: str = VARIABLE_START_STRING, - variable_end_string: str = VARIABLE_END_STRING, - comment_start_string: str = COMMENT_START_STRING, - comment_end_string: str = COMMENT_END_STRING, - line_statement_prefix: t.Optional[str] = LINE_STATEMENT_PREFIX, - line_comment_prefix: t.Optional[str] = LINE_COMMENT_PREFIX, - trim_blocks: bool = TRIM_BLOCKS, - lstrip_blocks: bool = LSTRIP_BLOCKS, - newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = NEWLINE_SEQUENCE, - keep_trailing_newline: bool = KEEP_TRAILING_NEWLINE, - extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = (), - optimized: bool = True, - undefined: t.Type[Undefined] = Undefined, - finalize: t.Optional[t.Callable[..., t.Any]] = None, - autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = False, - loader: t.Optional["BaseLoader"] = None, - cache_size: int = 400, - auto_reload: bool = True, - bytecode_cache: t.Optional["BytecodeCache"] = None, - enable_async: bool = False, - ): - # !!Important notice!! - # The constructor accepts quite a few arguments that should be - # passed by keyword rather than position. However it's important to - # not change the order of arguments because it's used at least - # internally in those cases: - # - spontaneous environments (i18n extension and Template) - # - unittests - # If parameter changes are required only add parameters at the end - # and don't change the arguments (or the defaults!) of the arguments - # existing already. - - # lexer / parser information - self.block_start_string = block_start_string - self.block_end_string = block_end_string - self.variable_start_string = variable_start_string - self.variable_end_string = variable_end_string - self.comment_start_string = comment_start_string - self.comment_end_string = comment_end_string - self.line_statement_prefix = line_statement_prefix - self.line_comment_prefix = line_comment_prefix - self.trim_blocks = trim_blocks - self.lstrip_blocks = lstrip_blocks - self.newline_sequence = newline_sequence - self.keep_trailing_newline = keep_trailing_newline - - # runtime information - self.undefined: t.Type[Undefined] = undefined - self.optimized = optimized - self.finalize = finalize - self.autoescape = autoescape - - # defaults - self.filters = DEFAULT_FILTERS.copy() - self.tests = DEFAULT_TESTS.copy() - self.globals = DEFAULT_NAMESPACE.copy() - - # set the loader provided - self.loader = loader - self.cache = create_cache(cache_size) - self.bytecode_cache = bytecode_cache - self.auto_reload = auto_reload - - # configurable policies - self.policies = DEFAULT_POLICIES.copy() - - # load extensions - self.extensions = load_extensions(self, extensions) - - self.is_async = enable_async - _environment_config_check(self) - - def add_extension(self, extension: t.Union[str, t.Type["Extension"]]) -> None: - """Adds an extension after the environment was created. - - .. versionadded:: 2.5 - """ - self.extensions.update(load_extensions(self, [extension])) - - def extend(self, **attributes: t.Any) -> None: - """Add the items to the instance of the environment if they do not exist - yet. This is used by :ref:`extensions ` to register - callbacks and configuration values without breaking inheritance. - """ - for key, value in attributes.items(): - if not hasattr(self, key): - setattr(self, key, value) - - def overlay( - self, - block_start_string: str = missing, - block_end_string: str = missing, - variable_start_string: str = missing, - variable_end_string: str = missing, - comment_start_string: str = missing, - comment_end_string: str = missing, - line_statement_prefix: t.Optional[str] = missing, - line_comment_prefix: t.Optional[str] = missing, - trim_blocks: bool = missing, - lstrip_blocks: bool = missing, - newline_sequence: "te.Literal['\\n', '\\r\\n', '\\r']" = missing, - keep_trailing_newline: bool = missing, - extensions: t.Sequence[t.Union[str, t.Type["Extension"]]] = missing, - optimized: bool = missing, - undefined: t.Type[Undefined] = missing, - finalize: t.Optional[t.Callable[..., t.Any]] = missing, - autoescape: t.Union[bool, t.Callable[[t.Optional[str]], bool]] = missing, - loader: t.Optional["BaseLoader"] = missing, - cache_size: int = missing, - auto_reload: bool = missing, - bytecode_cache: t.Optional["BytecodeCache"] = missing, - enable_async: bool = False, - ) -> "Environment": - """Create a new overlay environment that shares all the data with the - current environment except for cache and the overridden attributes. - Extensions cannot be removed for an overlayed environment. An overlayed - environment automatically gets all the extensions of the environment it - is linked to plus optional extra extensions. - - Creating overlays should happen after the initial environment was set - up completely. Not all attributes are truly linked, some are just - copied over so modifications on the original environment may not shine - through. - - .. versionchanged:: 3.1.2 - Added the ``newline_sequence``,, ``keep_trailing_newline``, - and ``enable_async`` parameters to match ``__init__``. - """ - args = dict(locals()) - del args["self"], args["cache_size"], args["extensions"], args["enable_async"] - - rv = object.__new__(self.__class__) - rv.__dict__.update(self.__dict__) - rv.overlayed = True - rv.linked_to = self - - for key, value in args.items(): - if value is not missing: - setattr(rv, key, value) - - if cache_size is not missing: - rv.cache = create_cache(cache_size) - else: - rv.cache = copy_cache(self.cache) - - rv.extensions = {} - for key, value in self.extensions.items(): - rv.extensions[key] = value.bind(rv) - if extensions is not missing: - rv.extensions.update(load_extensions(rv, extensions)) - - if enable_async is not missing: - rv.is_async = enable_async - - return _environment_config_check(rv) - - @property - def lexer(self) -> Lexer: - """The lexer for this environment.""" - return get_lexer(self) - - def iter_extensions(self) -> t.Iterator["Extension"]: - """Iterates over the extensions by priority.""" - return iter(sorted(self.extensions.values(), key=lambda x: x.priority)) - - def getitem( - self, obj: t.Any, argument: t.Union[str, t.Any] - ) -> t.Union[t.Any, Undefined]: - """Get an item or attribute of an object but prefer the item.""" - try: - return obj[argument] - except (AttributeError, TypeError, LookupError): - if isinstance(argument, str): - try: - attr = str(argument) - except Exception: - pass - else: - try: - return getattr(obj, attr) - except AttributeError: - pass - return self.undefined(obj=obj, name=argument) - - def getattr(self, obj: t.Any, attribute: str) -> t.Any: - """Get an item or attribute of an object but prefer the attribute. - Unlike :meth:`getitem` the attribute *must* be a string. - """ - try: - return getattr(obj, attribute) - except AttributeError: - pass - try: - return obj[attribute] - except (TypeError, LookupError, AttributeError): - return self.undefined(obj=obj, name=attribute) - - def _filter_test_common( - self, - name: t.Union[str, Undefined], - value: t.Any, - args: t.Optional[t.Sequence[t.Any]], - kwargs: t.Optional[t.Mapping[str, t.Any]], - context: t.Optional[Context], - eval_ctx: t.Optional[EvalContext], - is_filter: bool, - ) -> t.Any: - if is_filter: - env_map = self.filters - type_name = "filter" - else: - env_map = self.tests - type_name = "test" - - func = env_map.get(name) # type: ignore - - if func is None: - msg = f"No {type_name} named {name!r}." - - if isinstance(name, Undefined): - try: - name._fail_with_undefined_error() - except Exception as e: - msg = f"{msg} ({e}; did you forget to quote the callable name?)" - - raise TemplateRuntimeError(msg) - - args = [value, *(args if args is not None else ())] - kwargs = kwargs if kwargs is not None else {} - pass_arg = _PassArg.from_obj(func) - - if pass_arg is _PassArg.context: - if context is None: - raise TemplateRuntimeError( - f"Attempted to invoke a context {type_name} without context." - ) - - args.insert(0, context) - elif pass_arg is _PassArg.eval_context: - if eval_ctx is None: - if context is not None: - eval_ctx = context.eval_ctx - else: - eval_ctx = EvalContext(self) - - args.insert(0, eval_ctx) - elif pass_arg is _PassArg.environment: - args.insert(0, self) - - return func(*args, **kwargs) - - def call_filter( - self, - name: str, - value: t.Any, - args: t.Optional[t.Sequence[t.Any]] = None, - kwargs: t.Optional[t.Mapping[str, t.Any]] = None, - context: t.Optional[Context] = None, - eval_ctx: t.Optional[EvalContext] = None, - ) -> t.Any: - """Invoke a filter on a value the same way the compiler does. - - This might return a coroutine if the filter is running from an - environment in async mode and the filter supports async - execution. It's your responsibility to await this if needed. - - .. versionadded:: 2.7 - """ - return self._filter_test_common( - name, value, args, kwargs, context, eval_ctx, True - ) - - def call_test( - self, - name: str, - value: t.Any, - args: t.Optional[t.Sequence[t.Any]] = None, - kwargs: t.Optional[t.Mapping[str, t.Any]] = None, - context: t.Optional[Context] = None, - eval_ctx: t.Optional[EvalContext] = None, - ) -> t.Any: - """Invoke a test on a value the same way the compiler does. - - This might return a coroutine if the test is running from an - environment in async mode and the test supports async execution. - It's your responsibility to await this if needed. - - .. versionchanged:: 3.0 - Tests support ``@pass_context``, etc. decorators. Added - the ``context`` and ``eval_ctx`` parameters. - - .. versionadded:: 2.7 - """ - return self._filter_test_common( - name, value, args, kwargs, context, eval_ctx, False - ) - - @internalcode - def parse( - self, - source: str, - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - ) -> nodes.Template: - """Parse the sourcecode and return the abstract syntax tree. This - tree of nodes is used by the compiler to convert the template into - executable source- or bytecode. This is useful for debugging or to - extract information from templates. - - If you are :ref:`developing Jinja extensions ` - this gives you a good overview of the node tree generated. - """ - try: - return self._parse(source, name, filename) - except TemplateSyntaxError: - self.handle_exception(source=source) - - def _parse( - self, source: str, name: t.Optional[str], filename: t.Optional[str] - ) -> nodes.Template: - """Internal parsing function used by `parse` and `compile`.""" - return Parser(self, source, name, filename).parse() - - def lex( - self, - source: str, - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - ) -> t.Iterator[t.Tuple[int, str, str]]: - """Lex the given sourcecode and return a generator that yields - tokens as tuples in the form ``(lineno, token_type, value)``. - This can be useful for :ref:`extension development ` - and debugging templates. - - This does not perform preprocessing. If you want the preprocessing - of the extensions to be applied you have to filter source through - the :meth:`preprocess` method. - """ - source = str(source) - try: - return self.lexer.tokeniter(source, name, filename) - except TemplateSyntaxError: - self.handle_exception(source=source) - - def preprocess( - self, - source: str, - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - ) -> str: - """Preprocesses the source with all extensions. This is automatically - called for all parsing and compiling methods but *not* for :meth:`lex` - because there you usually only want the actual source tokenized. - """ - return reduce( - lambda s, e: e.preprocess(s, name, filename), - self.iter_extensions(), - str(source), - ) - - def _tokenize( - self, - source: str, - name: t.Optional[str], - filename: t.Optional[str] = None, - state: t.Optional[str] = None, - ) -> TokenStream: - """Called by the parser to do the preprocessing and filtering - for all the extensions. Returns a :class:`~jinja2.lexer.TokenStream`. - """ - source = self.preprocess(source, name, filename) - stream = self.lexer.tokenize(source, name, filename, state) - - for ext in self.iter_extensions(): - stream = ext.filter_stream(stream) # type: ignore - - if not isinstance(stream, TokenStream): - stream = TokenStream(stream, name, filename) # type: ignore - - return stream - - def _generate( - self, - source: nodes.Template, - name: t.Optional[str], - filename: t.Optional[str], - defer_init: bool = False, - ) -> str: - """Internal hook that can be overridden to hook a different generate - method in. - - .. versionadded:: 2.5 - """ - return generate( # type: ignore - source, - self, - name, - filename, - defer_init=defer_init, - optimized=self.optimized, - ) - - def _compile(self, source: str, filename: str) -> CodeType: - """Internal hook that can be overridden to hook a different compile - method in. - - .. versionadded:: 2.5 - """ - return compile(source, filename, "exec") # type: ignore - - @typing.overload - def compile( # type: ignore - self, - source: t.Union[str, nodes.Template], - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - raw: "te.Literal[False]" = False, - defer_init: bool = False, - ) -> CodeType: - ... - - @typing.overload - def compile( - self, - source: t.Union[str, nodes.Template], - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - raw: "te.Literal[True]" = ..., - defer_init: bool = False, - ) -> str: - ... - - @internalcode - def compile( - self, - source: t.Union[str, nodes.Template], - name: t.Optional[str] = None, - filename: t.Optional[str] = None, - raw: bool = False, - defer_init: bool = False, - ) -> t.Union[str, CodeType]: - """Compile a node or template source code. The `name` parameter is - the load name of the template after it was joined using - :meth:`join_path` if necessary, not the filename on the file system. - the `filename` parameter is the estimated filename of the template on - the file system. If the template came from a database or memory this - can be omitted. - - The return value of this method is a python code object. If the `raw` - parameter is `True` the return value will be a string with python - code equivalent to the bytecode returned otherwise. This method is - mainly used internally. - - `defer_init` is use internally to aid the module code generator. This - causes the generated code to be able to import without the global - environment variable to be set. - - .. versionadded:: 2.4 - `defer_init` parameter added. - """ - source_hint = None - try: - if isinstance(source, str): - source_hint = source - source = self._parse(source, name, filename) - source = self._generate(source, name, filename, defer_init=defer_init) - if raw: - return source - if filename is None: - filename = "