diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch ((TOP)) Keygen.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch ((TOP)) Keygen.md deleted file mode 100644 index 2bffc7dd0ac1400e8d37db1f025aadaed839d0d7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch ((TOP)) Keygen.md +++ /dev/null @@ -1,32 +0,0 @@ -
-

How to Download and Install Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch Keygen

-

Adobe Photoshop Lightroom Classic CC 2018 is a powerful and versatile software for editing and organizing your photos. It allows you to import, develop, and export your images in various formats, as well as create slideshows, web galleries, and photo books. With Lightroom Classic CC 2018, you can also sync your photos across multiple devices using the Creative Cloud service.

-

Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch keygen


Download File: https://byltly.com/2uKzxs



-

If you want to download and install Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch Keygen, you should be aware of the risks and drawbacks of using a cracked version of the software. In this article, we will explain why you should avoid using a Lightroom crack, and how you can get a legal and safe version of Lightroom Classic CC 2018 for free.

-

Why You Should Avoid Using a Lightroom Crack

-

A Lightroom crack is a modified version of the original software that bypasses the activation process and makes it appear as if it has been licensed with a valid key. However, using a Lightroom crack is illegal and unethical, as it violates the terms of use and the intellectual property rights of Adobe. Moreover, using a Lightroom crack can expose you to various problems, such as:

- -

How to Get Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) for Free

-

If you want to use Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) legally and safely, you have two options:

-

-
    -
  1. Download the free trial version: You can download the free trial version of Lightroom Classic CC 2018 from the official Adobe website[^2^]. The trial version gives you access to all the features and functions of the software for 7 days. After that, you will need to purchase a subscription plan to continue using it.
  2. Download the free Creative Cloud version: You can also download the free Creative Cloud version of Lightroom Classic CC 2018 from the official Adobe website[^2^]. The Creative Cloud version is similar to the trial version, but it does not expire after 7 days. However, it has some limitations, such as:
-

To download either version of Lightroom Classic CC 2018, you will need to create an Adobe account and install the Creative Cloud desktop app on your computer. Then, you can follow these steps:

-
    -
  1. Launch the Creative Cloud desktop app and sign in with your Adobe account.
  2. Select "Apps" from

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Extract Justdial Data What You Need to Know and How to Do It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Extract Justdial Data What You Need to Know and How to Do It.md deleted file mode 100644 index 69c56c50c69dc54384f73237ec627b815c4adc59..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Extract Justdial Data What You Need to Know and How to Do It.md +++ /dev/null @@ -1,30 +0,0 @@ -
    -

    How to Extract Justdial Data for Your Business Needs

    -

    Justdial is an Indian internet technology company that provides local search for different services in India over the phone, website and mobile apps. It has a huge database of businesses, products, services, reviews, ratings, deals and more across various categories and locations. If you are looking for a way to extract Justdial data for your business needs, such as lead generation, market research, competitor analysis, or data mining, you have come to the right place. In this article, we will show you how to extract Justdial data using a web scraping tool and what to consider before and after the extraction process.

    -

    What is Web Scraping?

    -

    Web scraping is a technique of extracting data from websites using a software program or a script. It can automate the process of collecting large amounts of data from various web pages and save them in a structured format, such as CSV, Excel, JSON, XML, etc. Web scraping can help you get the data you need from any website without having to manually copy and paste it.
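
    To make the idea concrete, here is a minimal sketch in Python, assuming the `requests` and `beautifulsoup4` packages are installed; the URL, the `.listing`, `.name`, and `.phone` selectors, and the output file name are placeholders for illustration only, not any real site's markup.

```python
# Minimal web scraping sketch: fetch one page, pull a couple of fields out of it,
# and save the result as CSV. The URL and the CSS selectors are placeholders.
import csv

import requests
from bs4 import BeautifulSoup

url = "https://example.com/listings"  # placeholder target page, not a real endpoint
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

rows = []
for card in soup.select(".listing"):      # hypothetical container selector
    name = card.select_one(".name")       # hypothetical field selectors
    phone = card.select_one(".phone")
    rows.append({
        "name": name.get_text(strip=True) if name else "",
        "phone": phone.get_text(strip=True) if phone else "",
    })

# Save the structured result so it can be opened in Excel or processed further.
with open("listings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "phone"])
    writer.writeheader()
    writer.writerows(rows)
```

    The same pattern scales up: swap in the real page URL and selectors, add more fields, and write JSON or Excel instead of CSV if that suits your pipeline better.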

    -

    extract justdial data


    Download Zip » https://byltly.com/2uKw27



    -

    How to Choose the Best Web Scraping Tool to Extract Justdial Data?

    -

    There are many web scraping tools available on the internet, but not all of them are equally effective or reliable. Some may have limited features, poor performance, or even legal issues. Therefore, you need to be careful when choosing the best web scraping tool to extract Justdial data. Here are some factors to consider:

    - -

    How to Use a Web Scraping Tool to Extract Justdial Data?

    -

    Once you have selected the best web scraping tool to extract Justdial data, you can follow these steps to use it (a minimal scripted sketch of the same workflow is shown after the list):

    -
      -
    1. Download and Install the Tool: Go to the official website of the tool and download the latest version. Then, install it on your computer following the instructions.
    2. Create a Project and Select Your Target Website: Open the tool and create a new project. Then, enter the URL of your target website or app. For example, if you want to extract Justdial data from its website, you can enter https://www.justdial.com/.
    3. Select Your Data Fields and Configure Your Settings: Choose what data fields you want to extract from Justdial, such as business name, address, phone number, email, website, rating, review, category, location, etc. You can also configure your settings according to your preferences, such as pagination, proxy, delay, captcha, etc.
    4. Start Scraping and Save Your Data: Click on "Start" or "Run" to begin the scraping process. You will see the extracted data on the interface. You can also monitor the progress and status of the scraping task. Once the scraping is completed, you can save your data in your desired format.
    -
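
    For readers who would rather script this workflow than drive a GUI tool, here is a hedged sketch of steps 2 to 4 in Python. The URL pattern, the `.resultbox` and field selectors, the page range, and the delay are assumptions for illustration, not Justdial's actual markup or limits, and any real scraping should respect the site's terms of service and robots.txt.

```python
# Sketch of the scripted equivalent of steps 2-4: pick a target, choose fields,
# paginate with a polite delay, then save the data. Selectors and the URL pattern
# below are assumptions for illustration, not Justdial's real page structure.
import csv
import time

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://www.justdial.com/Mumbai/Restaurants/page-{page}"  # assumed URL pattern
FIELDS = ["business_name", "address", "phone"]                        # chosen data fields


def parse_listing(card):
    """Pull the chosen fields out of one listing card (selectors are hypothetical)."""
    def text(selector):
        node = card.select_one(selector)
        return node.get_text(strip=True) if node else ""
    return {
        "business_name": text(".store-name"),
        "address": text(".address"),
        "phone": text(".phone"),
    }


records = []
for page in range(1, 4):                       # pagination: first few pages only
    resp = requests.get(BASE_URL.format(page=page),
                        headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    if resp.status_code != 200:
        break
    soup = BeautifulSoup(resp.text, "html.parser")
    cards = soup.select(".resultbox")          # hypothetical listing container
    if not cards:
        break
    records.extend(parse_listing(card) for card in cards)
    time.sleep(2)                              # delay between requests, as in the tool settings

with open("justdial_data.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
```

    Stopping when a page returns no cards (or a non-200 status) is a simple way to end pagination without knowing the total page count in advance.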

    Tips for Successful Data Extraction

    -

    To increase your chances of extracting Justdial data successfully, here are some tips to follow:

    #WhispaStory: I have an STD, but I am a virgin!

    By Morenike | September 13, 2021 | March 17, 2022

    My name is Ada. You can call me old school, but I promised myself I would remain a virgin until I get married. This is not because I am keeping my virginity for my husband or anything like that. This is because I finally realized it was okay to keep my virginity for me. I ALWAYS take care of my private parts by cleaning them thoroughly and I have my bath twice daily. I also use scented wipes to wipe my vagina after I pee. So you can imagine my surprise when I started noticing swelling around my vagina. There was a lot of discharge from my vagina and a tingling, painful sensation whenever I pee. My vagina was itching me like crazy. My common sense told me that these are all symptoms of a Sexually Transmitted Disease (STD). Imagine my shock! Like how is it possible?? A virgin having STDs????

    -

    My friend said that I might have a spirit husband and that I probably have sex in my dreams, and that it was manifesting as an infection in real life. Her theory was funny, but in the back of my mind, I was worried. I really had no clue as to why this was happening. I knew I should go to the hospital, but the fear of opening my legs or explaining what was wrong to a stranger was holding me back from seeking treatment.

    -

    can you get a std from a virgin


    DOWNLOAD: https://ssurll.com/2uzvIl



    -

    Dr. Ezinne asked me about my habits when it comes to cleaning my genitals, and she asked if I used scented soaps and douches. I said I used scented soap regularly, and she advised me to stop immediately. The oddest question was when she asked me what type of panties I like to wear. Mehn, I thought she was being funny until she explained that cotton panties help to absorb moisture, unlike panties made of silk, satin, etc. She said wearing normal (not tight-fitting) cotton panties can keep my private area dry, which will prevent Candida from overgrowing, which happens when the area is too moist and not getting enough air. Also, she explained that the parts to wash are called the vulva and not the vagina, and that the vagina is the passageway inside my body that leads to the cervix, which she said is the door to the womb.

    -

    In sum, you can get STDs from kissing. Most STD cases arise from having unprotected sex but if you do get an STD from kissing, there are quick and easy treatment options that you can undertake to manage your symptoms.

    -

    It is possible to be a virgin and still have an STD. Of course, the definition of virginity is different for each person, so the answer depends in part on your definition.

    -

    If you learn that you or your partner has an STD, you should seek out treatment immediately. In some instances you should refrain from having sex; in others, it is okay to have sex as long as you wear condoms, though ideally you should wait until you have retested and been given the green light by your physician.

    -

    Results: Of 122 patients diagnosed with PID via surgery, 5 women were virgins (4.1%). The median age was 21 years (range, 14-24 years), and all patients presented with abdominal pain. The median diameter of the pelvic abscess pocket on preoperative imaging was 4.5 cm (range, 2.6-15 cm). Only 1 case was preoperatively diagnosed as a tubo-ovarian abscess; the others were expected to be benign ovarian tumors, such as endometrioma and dermoid cysts. No possible source of infection was identified for any patient, except 1 who had a history of an appendectomy because of a ruptured appendix. The results of the histopathological analysis of the excisional biopsy performed during surgery in 4 cases were consistent with acute suppurative inflammation. After postoperative antibiotic use, the conditions of all patients stabilized, and they were discharged from the hospital on median postoperative day 9.

    -

    In general, a pelvic exam or a vaginal exam cannot reveal with absolute certainty that a woman is a virgin or has been sexually active. It is always better to have an honest discussion with your gynecologist about your sexual history. This makes it easier for them to look for the early signs of pregnancy and test you for sexually transmitted diseases (STDs), such as gonorrhea and genital herpes. You can also discuss proper methods of birth control with them.

    -

    Every person is unique. Many people believe in the concept of virginity and hold it sacred. Though being a virgin or being a responsible sexually active person is a personal choice, an intact hymen has been used as proof of virginity in the past. The truth is that the hymen is a flexible piece of mucosal tissue that may be thick, thin or even absent in some women. In some women, using a tampon, vigorous cycling, exercises and masturbatory activities may cause the hymen to rupture.

    -

    A pelvic exam can be done even if you have never had sexual intercourse because the opening to your vagina is large enough to allow for the exam. Most of the time, a doctor can't tell if a girl has had sex just from a pelvic exam (and doctors don't usually give teen girls pelvic exams unless there's a sign of a problem). However, let the doctor know if you have had sex anyway. Having unprotected sex puts you at risk for STDs as well as unplanned pregnancy.

    -

    -

    Hymenoplasty or hymen restoration is a procedure that repairs a ripped or torn hymen. The hymen is a ring-like skin membrane partially covering the opening of the vagina. The biological function of the hymen is still under debate. However, its social function is widely known for its virginity status when intact.

    -

    The whole idea that the absence of part of the hymen means you are not a virgin is erroneous. It is your choice to retain your virginity or experience sexual intimacy with another person without pressure or impairment (such as from drugs or alcohol). Virginity cannot be lost or taken by someone else. This is important to understand because you are in charge of your body and of your sexuality.

    -

    Kaiser Family Foundation. Seventeen. A series of national surveys of teens about sex: sexually transmitted diseases [cited 2004 Dec 6]. Available from: www.kff.org/entpartnerships/seventeen_surveys.cfm.

    -

    Hi, could you get any disease, STD, HIV or other diseases from sucking your own penis? I am a virgin, never had any sex, only lots of masturbation and more recently auto-fellatio; before and after I do it I always wash thoroughly with soap. Another concern is that some small whitish bumps have appeared in a group around the foreskin... and another problem would be from the precum: some of it landed on my cheek and soon after 4 itchy zits appeared. I don't know, am I allergic or something? And I forgot to mention that after auto-fellatio I wash my mouth with 70-degree alcohol, rub some on my lips and in my mouth, and afterwards I rinse repeatedly with water... is that any good? Please provide an answer.

    -

    Another person afraid of contracting an STD, including HIV, from himself???? YIKES! You're like the fifth one this week. Can anyone really doubt we need to return to science-based sex education in our schools?

    -
    -
    \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/config.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/config.py deleted file mode 100644 index f07b9d782ba0597c174dee81097c28280335fdba..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/config.py +++ /dev/null @@ -1,115 +0,0 @@ -""" Model / Layer Config singleton state -""" -from typing import Any, Optional - -__all__ = [ - 'is_exportable', 'is_scriptable', 'is_no_jit', - 'set_exportable', 'set_scriptable', 'set_no_jit', 'set_layer_config' -] - -# Set to True if prefer to have layers with no jit optimization (includes activations) -_NO_JIT = False - -# Set to True if prefer to have activation layers with no jit optimization -# NOTE not currently used as no difference between no_jit and no_activation jit as only layers obeying -# the jit flags so far are activations. This will change as more layers are updated and/or added. -_NO_ACTIVATION_JIT = False - -# Set to True if exporting a model with Same padding via ONNX -_EXPORTABLE = False - -# Set to True if wanting to use torch.jit.script on a model -_SCRIPTABLE = False - - -def is_no_jit(): - return _NO_JIT - - -class set_no_jit: - def __init__(self, mode: bool) -> None: - global _NO_JIT - self.prev = _NO_JIT - _NO_JIT = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _NO_JIT - _NO_JIT = self.prev - return False - - -def is_exportable(): - return _EXPORTABLE - - -class set_exportable: - def __init__(self, mode: bool) -> None: - global _EXPORTABLE - self.prev = _EXPORTABLE - _EXPORTABLE = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _EXPORTABLE - _EXPORTABLE = self.prev - return False - - -def is_scriptable(): - return _SCRIPTABLE - - -class set_scriptable: - def __init__(self, mode: bool) -> None: - global _SCRIPTABLE - self.prev = _SCRIPTABLE - _SCRIPTABLE = mode - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _SCRIPTABLE - _SCRIPTABLE = self.prev - return False - - -class set_layer_config: - """ Layer config context manager that allows setting all layer config flags at once. - If a flag arg is None, it will not change the current value. 
- """ - def __init__( - self, - scriptable: Optional[bool] = None, - exportable: Optional[bool] = None, - no_jit: Optional[bool] = None, - no_activation_jit: Optional[bool] = None): - global _SCRIPTABLE - global _EXPORTABLE - global _NO_JIT - global _NO_ACTIVATION_JIT - self.prev = _SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT - if scriptable is not None: - _SCRIPTABLE = scriptable - if exportable is not None: - _EXPORTABLE = exportable - if no_jit is not None: - _NO_JIT = no_jit - if no_activation_jit is not None: - _NO_ACTIVATION_JIT = no_activation_jit - - def __enter__(self) -> None: - pass - - def __exit__(self, *args: Any) -> bool: - global _SCRIPTABLE - global _EXPORTABLE - global _NO_JIT - global _NO_ACTIVATION_JIT - _SCRIPTABLE, _EXPORTABLE, _NO_JIT, _NO_ACTIVATION_JIT = self.prev - return False diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/resnest.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/resnest.py deleted file mode 100644 index 076ef62195bac2a9660261446b5756c3880dfdf2..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/backbones/resnest.py +++ /dev/null @@ -1,314 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from annotator.mmpkg.mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int | tuple[int]): Same as nn.Conv2d. - stride (int | tuple[int]): Same as nn.Conv2d. - padding (int | tuple[int]): Same as nn.Conv2d. - dilation (int | tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - width_per_group (int): Width per group of conv2. 64x4d indicates - ``groups=64, width_per_group=4`` and 32x8d indicates - ``groups=32, width_per_group=8``. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/cityscapes_panoptic.py deleted file mode 100644 index 7ce9ec48f673dadf3f5b4ae0592fc82415d9f925..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/cityscapes_panoptic.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import logging -import os - -from annotator.oneformer.detectron2.data import DatasetCatalog, MetadataCatalog -from annotator.oneformer.detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from annotator.oneformer.detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. -""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. 
- - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - with open(gt_json) as f: - json_info = json.load(f) - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. 
- # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. - thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py deleted file mode 100644 index 31bf0cfcd5096ab87835db86a28671d474514c40..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/data/datasets/lvis_v1_category_image_count.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Autogen with -# with open("lvis_v1_train.json", "r") as f: -# a = json.load(f) -# c = a["categories"] -# for x in c: -# del x["name"] -# del x["instance_count"] -# del x["def"] -# del x["synonyms"] -# del x["frequency"] -# del x["synset"] -# LVIS_CATEGORY_IMAGE_COUNT = repr(c) + " # noqa" -# with open("/tmp/lvis_category_image_count.py", "wt") as f: -# f.write(f"LVIS_CATEGORY_IMAGE_COUNT = {LVIS_CATEGORY_IMAGE_COUNT}") -# Then paste the contents of that file below - -# fmt: off -LVIS_CATEGORY_IMAGE_COUNT = [{'id': 1, 'image_count': 64}, {'id': 2, 'image_count': 364}, {'id': 3, 'image_count': 1911}, {'id': 4, 'image_count': 149}, {'id': 5, 'image_count': 29}, {'id': 6, 'image_count': 26}, {'id': 7, 'image_count': 59}, {'id': 8, 'image_count': 22}, {'id': 9, 'image_count': 12}, {'id': 10, 'image_count': 28}, {'id': 11, 'image_count': 505}, {'id': 12, 'image_count': 1207}, {'id': 13, 'image_count': 4}, {'id': 14, 'image_count': 10}, {'id': 15, 'image_count': 500}, {'id': 16, 'image_count': 33}, {'id': 17, 'image_count': 3}, {'id': 18, 'image_count': 44}, {'id': 19, 'image_count': 561}, {'id': 20, 'image_count': 8}, {'id': 21, 'image_count': 9}, {'id': 22, 'image_count': 33}, {'id': 23, 'image_count': 1883}, {'id': 24, 'image_count': 98}, {'id': 25, 'image_count': 70}, {'id': 26, 'image_count': 46}, {'id': 27, 'image_count': 117}, {'id': 28, 'image_count': 41}, {'id': 29, 'image_count': 1395}, {'id': 30, 'image_count': 7}, {'id': 31, 'image_count': 1}, {'id': 32, 'image_count': 314}, {'id': 33, 'image_count': 31}, {'id': 34, 'image_count': 1905}, {'id': 35, 'image_count': 1859}, {'id': 36, 'image_count': 1623}, {'id': 37, 'image_count': 47}, {'id': 38, 'image_count': 3}, {'id': 39, 'image_count': 3}, {'id': 40, 'image_count': 1}, {'id': 41, 'image_count': 305}, {'id': 42, 'image_count': 6}, {'id': 43, 'image_count': 210}, {'id': 44, 'image_count': 36}, {'id': 45, 'image_count': 1787}, {'id': 46, 'image_count': 17}, {'id': 47, 'image_count': 51}, {'id': 48, 'image_count': 138}, {'id': 49, 'image_count': 3}, {'id': 50, 'image_count': 1470}, {'id': 51, 'image_count': 3}, {'id': 52, 'image_count': 2}, {'id': 53, 'image_count': 186}, {'id': 54, 'image_count': 76}, {'id': 55, 'image_count': 26}, {'id': 56, 'image_count': 303}, {'id': 57, 'image_count': 738}, {'id': 58, 'image_count': 1799}, {'id': 59, 'image_count': 1934}, {'id': 60, 'image_count': 1609}, {'id': 61, 'image_count': 1622}, {'id': 62, 'image_count': 41}, {'id': 63, 'image_count': 4}, {'id': 64, 'image_count': 11}, {'id': 65, 'image_count': 270}, {'id': 66, 'image_count': 349}, {'id': 67, 'image_count': 42}, {'id': 68, 'image_count': 823}, {'id': 69, 'image_count': 6}, {'id': 70, 'image_count': 48}, {'id': 71, 'image_count': 3}, {'id': 72, 'image_count': 42}, {'id': 73, 'image_count': 24}, {'id': 74, 'image_count': 16}, {'id': 75, 'image_count': 605}, {'id': 76, 'image_count': 646}, {'id': 77, 'image_count': 1765}, {'id': 78, 'image_count': 2}, {'id': 79, 'image_count': 125}, {'id': 80, 'image_count': 1420}, {'id': 81, 'image_count': 140}, {'id': 82, 'image_count': 4}, {'id': 83, 'image_count': 322}, {'id': 84, 'image_count': 60}, {'id': 85, 'image_count': 2}, {'id': 86, 'image_count': 231}, {'id': 87, 'image_count': 333}, {'id': 88, 'image_count': 1941}, {'id': 89, 'image_count': 367}, {'id': 90, 'image_count': 1922}, {'id': 91, 'image_count': 18}, {'id': 92, 'image_count': 81}, {'id': 93, 'image_count': 1}, {'id': 94, 'image_count': 1852}, {'id': 95, 'image_count': 430}, {'id': 96, 'image_count': 247}, {'id': 97, 'image_count': 
94}, {'id': 98, 'image_count': 21}, {'id': 99, 'image_count': 1821}, {'id': 100, 'image_count': 16}, {'id': 101, 'image_count': 12}, {'id': 102, 'image_count': 25}, {'id': 103, 'image_count': 41}, {'id': 104, 'image_count': 244}, {'id': 105, 'image_count': 7}, {'id': 106, 'image_count': 1}, {'id': 107, 'image_count': 40}, {'id': 108, 'image_count': 40}, {'id': 109, 'image_count': 104}, {'id': 110, 'image_count': 1671}, {'id': 111, 'image_count': 49}, {'id': 112, 'image_count': 243}, {'id': 113, 'image_count': 2}, {'id': 114, 'image_count': 242}, {'id': 115, 'image_count': 271}, {'id': 116, 'image_count': 104}, {'id': 117, 'image_count': 8}, {'id': 118, 'image_count': 1758}, {'id': 119, 'image_count': 1}, {'id': 120, 'image_count': 48}, {'id': 121, 'image_count': 14}, {'id': 122, 'image_count': 40}, {'id': 123, 'image_count': 1}, {'id': 124, 'image_count': 37}, {'id': 125, 'image_count': 1510}, {'id': 126, 'image_count': 6}, {'id': 127, 'image_count': 1903}, {'id': 128, 'image_count': 70}, {'id': 129, 'image_count': 86}, {'id': 130, 'image_count': 7}, {'id': 131, 'image_count': 5}, {'id': 132, 'image_count': 1406}, {'id': 133, 'image_count': 1901}, {'id': 134, 'image_count': 15}, {'id': 135, 'image_count': 28}, {'id': 136, 'image_count': 6}, {'id': 137, 'image_count': 494}, {'id': 138, 'image_count': 234}, {'id': 139, 'image_count': 1922}, {'id': 140, 'image_count': 1}, {'id': 141, 'image_count': 35}, {'id': 142, 'image_count': 5}, {'id': 143, 'image_count': 1828}, {'id': 144, 'image_count': 8}, {'id': 145, 'image_count': 63}, {'id': 146, 'image_count': 1668}, {'id': 147, 'image_count': 4}, {'id': 148, 'image_count': 95}, {'id': 149, 'image_count': 17}, {'id': 150, 'image_count': 1567}, {'id': 151, 'image_count': 2}, {'id': 152, 'image_count': 103}, {'id': 153, 'image_count': 50}, {'id': 154, 'image_count': 1309}, {'id': 155, 'image_count': 6}, {'id': 156, 'image_count': 92}, {'id': 157, 'image_count': 19}, {'id': 158, 'image_count': 37}, {'id': 159, 'image_count': 4}, {'id': 160, 'image_count': 709}, {'id': 161, 'image_count': 9}, {'id': 162, 'image_count': 82}, {'id': 163, 'image_count': 15}, {'id': 164, 'image_count': 3}, {'id': 165, 'image_count': 61}, {'id': 166, 'image_count': 51}, {'id': 167, 'image_count': 5}, {'id': 168, 'image_count': 13}, {'id': 169, 'image_count': 642}, {'id': 170, 'image_count': 24}, {'id': 171, 'image_count': 255}, {'id': 172, 'image_count': 9}, {'id': 173, 'image_count': 1808}, {'id': 174, 'image_count': 31}, {'id': 175, 'image_count': 158}, {'id': 176, 'image_count': 80}, {'id': 177, 'image_count': 1884}, {'id': 178, 'image_count': 158}, {'id': 179, 'image_count': 2}, {'id': 180, 'image_count': 12}, {'id': 181, 'image_count': 1659}, {'id': 182, 'image_count': 7}, {'id': 183, 'image_count': 834}, {'id': 184, 'image_count': 57}, {'id': 185, 'image_count': 174}, {'id': 186, 'image_count': 95}, {'id': 187, 'image_count': 27}, {'id': 188, 'image_count': 22}, {'id': 189, 'image_count': 1391}, {'id': 190, 'image_count': 90}, {'id': 191, 'image_count': 40}, {'id': 192, 'image_count': 445}, {'id': 193, 'image_count': 21}, {'id': 194, 'image_count': 1132}, {'id': 195, 'image_count': 177}, {'id': 196, 'image_count': 4}, {'id': 197, 'image_count': 17}, {'id': 198, 'image_count': 84}, {'id': 199, 'image_count': 55}, {'id': 200, 'image_count': 30}, {'id': 201, 'image_count': 25}, {'id': 202, 'image_count': 2}, {'id': 203, 'image_count': 125}, {'id': 204, 'image_count': 1135}, {'id': 205, 'image_count': 19}, {'id': 206, 'image_count': 72}, {'id': 207, 'image_count': 1926}, 
{'id': 208, 'image_count': 159}, {'id': 209, 'image_count': 7}, {'id': 210, 'image_count': 1}, {'id': 211, 'image_count': 13}, {'id': 212, 'image_count': 35}, {'id': 213, 'image_count': 18}, {'id': 214, 'image_count': 8}, {'id': 215, 'image_count': 6}, {'id': 216, 'image_count': 35}, {'id': 217, 'image_count': 1222}, {'id': 218, 'image_count': 103}, {'id': 219, 'image_count': 28}, {'id': 220, 'image_count': 63}, {'id': 221, 'image_count': 28}, {'id': 222, 'image_count': 5}, {'id': 223, 'image_count': 7}, {'id': 224, 'image_count': 14}, {'id': 225, 'image_count': 1918}, {'id': 226, 'image_count': 133}, {'id': 227, 'image_count': 16}, {'id': 228, 'image_count': 27}, {'id': 229, 'image_count': 110}, {'id': 230, 'image_count': 1895}, {'id': 231, 'image_count': 4}, {'id': 232, 'image_count': 1927}, {'id': 233, 'image_count': 8}, {'id': 234, 'image_count': 1}, {'id': 235, 'image_count': 263}, {'id': 236, 'image_count': 10}, {'id': 237, 'image_count': 2}, {'id': 238, 'image_count': 3}, {'id': 239, 'image_count': 87}, {'id': 240, 'image_count': 9}, {'id': 241, 'image_count': 71}, {'id': 242, 'image_count': 13}, {'id': 243, 'image_count': 18}, {'id': 244, 'image_count': 2}, {'id': 245, 'image_count': 5}, {'id': 246, 'image_count': 45}, {'id': 247, 'image_count': 1}, {'id': 248, 'image_count': 23}, {'id': 249, 'image_count': 32}, {'id': 250, 'image_count': 4}, {'id': 251, 'image_count': 1}, {'id': 252, 'image_count': 858}, {'id': 253, 'image_count': 661}, {'id': 254, 'image_count': 168}, {'id': 255, 'image_count': 210}, {'id': 256, 'image_count': 65}, {'id': 257, 'image_count': 4}, {'id': 258, 'image_count': 2}, {'id': 259, 'image_count': 159}, {'id': 260, 'image_count': 31}, {'id': 261, 'image_count': 811}, {'id': 262, 'image_count': 1}, {'id': 263, 'image_count': 42}, {'id': 264, 'image_count': 27}, {'id': 265, 'image_count': 2}, {'id': 266, 'image_count': 5}, {'id': 267, 'image_count': 95}, {'id': 268, 'image_count': 32}, {'id': 269, 'image_count': 1}, {'id': 270, 'image_count': 1}, {'id': 271, 'image_count': 1844}, {'id': 272, 'image_count': 897}, {'id': 273, 'image_count': 31}, {'id': 274, 'image_count': 23}, {'id': 275, 'image_count': 1}, {'id': 276, 'image_count': 202}, {'id': 277, 'image_count': 746}, {'id': 278, 'image_count': 44}, {'id': 279, 'image_count': 14}, {'id': 280, 'image_count': 26}, {'id': 281, 'image_count': 1}, {'id': 282, 'image_count': 2}, {'id': 283, 'image_count': 25}, {'id': 284, 'image_count': 238}, {'id': 285, 'image_count': 592}, {'id': 286, 'image_count': 26}, {'id': 287, 'image_count': 5}, {'id': 288, 'image_count': 42}, {'id': 289, 'image_count': 13}, {'id': 290, 'image_count': 46}, {'id': 291, 'image_count': 1}, {'id': 292, 'image_count': 8}, {'id': 293, 'image_count': 34}, {'id': 294, 'image_count': 5}, {'id': 295, 'image_count': 1}, {'id': 296, 'image_count': 1871}, {'id': 297, 'image_count': 717}, {'id': 298, 'image_count': 1010}, {'id': 299, 'image_count': 679}, {'id': 300, 'image_count': 3}, {'id': 301, 'image_count': 4}, {'id': 302, 'image_count': 1}, {'id': 303, 'image_count': 166}, {'id': 304, 'image_count': 2}, {'id': 305, 'image_count': 266}, {'id': 306, 'image_count': 101}, {'id': 307, 'image_count': 6}, {'id': 308, 'image_count': 14}, {'id': 309, 'image_count': 133}, {'id': 310, 'image_count': 2}, {'id': 311, 'image_count': 38}, {'id': 312, 'image_count': 95}, {'id': 313, 'image_count': 1}, {'id': 314, 'image_count': 12}, {'id': 315, 'image_count': 49}, {'id': 316, 'image_count': 5}, {'id': 317, 'image_count': 5}, {'id': 318, 'image_count': 16}, {'id': 
319, 'image_count': 216}, {'id': 320, 'image_count': 12}, {'id': 321, 'image_count': 1}, {'id': 322, 'image_count': 54}, {'id': 323, 'image_count': 5}, {'id': 324, 'image_count': 245}, {'id': 325, 'image_count': 12}, {'id': 326, 'image_count': 7}, {'id': 327, 'image_count': 35}, {'id': 328, 'image_count': 36}, {'id': 329, 'image_count': 32}, {'id': 330, 'image_count': 1027}, {'id': 331, 'image_count': 10}, {'id': 332, 'image_count': 12}, {'id': 333, 'image_count': 1}, {'id': 334, 'image_count': 67}, {'id': 335, 'image_count': 71}, {'id': 336, 'image_count': 30}, {'id': 337, 'image_count': 48}, {'id': 338, 'image_count': 249}, {'id': 339, 'image_count': 13}, {'id': 340, 'image_count': 29}, {'id': 341, 'image_count': 14}, {'id': 342, 'image_count': 236}, {'id': 343, 'image_count': 15}, {'id': 344, 'image_count': 1521}, {'id': 345, 'image_count': 25}, {'id': 346, 'image_count': 249}, {'id': 347, 'image_count': 139}, {'id': 348, 'image_count': 2}, {'id': 349, 'image_count': 2}, {'id': 350, 'image_count': 1890}, {'id': 351, 'image_count': 1240}, {'id': 352, 'image_count': 1}, {'id': 353, 'image_count': 9}, {'id': 354, 'image_count': 1}, {'id': 355, 'image_count': 3}, {'id': 356, 'image_count': 11}, {'id': 357, 'image_count': 4}, {'id': 358, 'image_count': 236}, {'id': 359, 'image_count': 44}, {'id': 360, 'image_count': 19}, {'id': 361, 'image_count': 1100}, {'id': 362, 'image_count': 7}, {'id': 363, 'image_count': 69}, {'id': 364, 'image_count': 2}, {'id': 365, 'image_count': 8}, {'id': 366, 'image_count': 5}, {'id': 367, 'image_count': 227}, {'id': 368, 'image_count': 6}, {'id': 369, 'image_count': 106}, {'id': 370, 'image_count': 81}, {'id': 371, 'image_count': 17}, {'id': 372, 'image_count': 134}, {'id': 373, 'image_count': 312}, {'id': 374, 'image_count': 8}, {'id': 375, 'image_count': 271}, {'id': 376, 'image_count': 2}, {'id': 377, 'image_count': 103}, {'id': 378, 'image_count': 1938}, {'id': 379, 'image_count': 574}, {'id': 380, 'image_count': 120}, {'id': 381, 'image_count': 2}, {'id': 382, 'image_count': 2}, {'id': 383, 'image_count': 13}, {'id': 384, 'image_count': 29}, {'id': 385, 'image_count': 1710}, {'id': 386, 'image_count': 66}, {'id': 387, 'image_count': 1008}, {'id': 388, 'image_count': 1}, {'id': 389, 'image_count': 3}, {'id': 390, 'image_count': 1942}, {'id': 391, 'image_count': 19}, {'id': 392, 'image_count': 1488}, {'id': 393, 'image_count': 46}, {'id': 394, 'image_count': 106}, {'id': 395, 'image_count': 115}, {'id': 396, 'image_count': 19}, {'id': 397, 'image_count': 2}, {'id': 398, 'image_count': 1}, {'id': 399, 'image_count': 28}, {'id': 400, 'image_count': 9}, {'id': 401, 'image_count': 192}, {'id': 402, 'image_count': 12}, {'id': 403, 'image_count': 21}, {'id': 404, 'image_count': 247}, {'id': 405, 'image_count': 6}, {'id': 406, 'image_count': 64}, {'id': 407, 'image_count': 7}, {'id': 408, 'image_count': 40}, {'id': 409, 'image_count': 542}, {'id': 410, 'image_count': 2}, {'id': 411, 'image_count': 1898}, {'id': 412, 'image_count': 36}, {'id': 413, 'image_count': 4}, {'id': 414, 'image_count': 1}, {'id': 415, 'image_count': 191}, {'id': 416, 'image_count': 6}, {'id': 417, 'image_count': 41}, {'id': 418, 'image_count': 39}, {'id': 419, 'image_count': 46}, {'id': 420, 'image_count': 1}, {'id': 421, 'image_count': 1451}, {'id': 422, 'image_count': 1878}, {'id': 423, 'image_count': 11}, {'id': 424, 'image_count': 82}, {'id': 425, 'image_count': 18}, {'id': 426, 'image_count': 1}, {'id': 427, 'image_count': 7}, {'id': 428, 'image_count': 3}, {'id': 429, 'image_count': 
575}, {'id': 430, 'image_count': 1907}, {'id': 431, 'image_count': 8}, {'id': 432, 'image_count': 4}, {'id': 433, 'image_count': 32}, {'id': 434, 'image_count': 11}, {'id': 435, 'image_count': 4}, {'id': 436, 'image_count': 54}, {'id': 437, 'image_count': 202}, {'id': 438, 'image_count': 32}, {'id': 439, 'image_count': 3}, {'id': 440, 'image_count': 130}, {'id': 441, 'image_count': 119}, {'id': 442, 'image_count': 141}, {'id': 443, 'image_count': 29}, {'id': 444, 'image_count': 525}, {'id': 445, 'image_count': 1323}, {'id': 446, 'image_count': 2}, {'id': 447, 'image_count': 113}, {'id': 448, 'image_count': 16}, {'id': 449, 'image_count': 7}, {'id': 450, 'image_count': 35}, {'id': 451, 'image_count': 1908}, {'id': 452, 'image_count': 353}, {'id': 453, 'image_count': 18}, {'id': 454, 'image_count': 14}, {'id': 455, 'image_count': 77}, {'id': 456, 'image_count': 8}, {'id': 457, 'image_count': 37}, {'id': 458, 'image_count': 1}, {'id': 459, 'image_count': 346}, {'id': 460, 'image_count': 19}, {'id': 461, 'image_count': 1779}, {'id': 462, 'image_count': 23}, {'id': 463, 'image_count': 25}, {'id': 464, 'image_count': 67}, {'id': 465, 'image_count': 19}, {'id': 466, 'image_count': 28}, {'id': 467, 'image_count': 4}, {'id': 468, 'image_count': 27}, {'id': 469, 'image_count': 1861}, {'id': 470, 'image_count': 11}, {'id': 471, 'image_count': 13}, {'id': 472, 'image_count': 13}, {'id': 473, 'image_count': 32}, {'id': 474, 'image_count': 1767}, {'id': 475, 'image_count': 42}, {'id': 476, 'image_count': 17}, {'id': 477, 'image_count': 128}, {'id': 478, 'image_count': 1}, {'id': 479, 'image_count': 9}, {'id': 480, 'image_count': 10}, {'id': 481, 'image_count': 4}, {'id': 482, 'image_count': 9}, {'id': 483, 'image_count': 18}, {'id': 484, 'image_count': 41}, {'id': 485, 'image_count': 28}, {'id': 486, 'image_count': 3}, {'id': 487, 'image_count': 65}, {'id': 488, 'image_count': 9}, {'id': 489, 'image_count': 23}, {'id': 490, 'image_count': 24}, {'id': 491, 'image_count': 1}, {'id': 492, 'image_count': 2}, {'id': 493, 'image_count': 59}, {'id': 494, 'image_count': 48}, {'id': 495, 'image_count': 17}, {'id': 496, 'image_count': 1877}, {'id': 497, 'image_count': 18}, {'id': 498, 'image_count': 1920}, {'id': 499, 'image_count': 50}, {'id': 500, 'image_count': 1890}, {'id': 501, 'image_count': 99}, {'id': 502, 'image_count': 1530}, {'id': 503, 'image_count': 3}, {'id': 504, 'image_count': 11}, {'id': 505, 'image_count': 19}, {'id': 506, 'image_count': 3}, {'id': 507, 'image_count': 63}, {'id': 508, 'image_count': 5}, {'id': 509, 'image_count': 6}, {'id': 510, 'image_count': 233}, {'id': 511, 'image_count': 54}, {'id': 512, 'image_count': 36}, {'id': 513, 'image_count': 10}, {'id': 514, 'image_count': 124}, {'id': 515, 'image_count': 101}, {'id': 516, 'image_count': 3}, {'id': 517, 'image_count': 363}, {'id': 518, 'image_count': 3}, {'id': 519, 'image_count': 30}, {'id': 520, 'image_count': 18}, {'id': 521, 'image_count': 199}, {'id': 522, 'image_count': 97}, {'id': 523, 'image_count': 32}, {'id': 524, 'image_count': 121}, {'id': 525, 'image_count': 16}, {'id': 526, 'image_count': 12}, {'id': 527, 'image_count': 2}, {'id': 528, 'image_count': 214}, {'id': 529, 'image_count': 48}, {'id': 530, 'image_count': 26}, {'id': 531, 'image_count': 13}, {'id': 532, 'image_count': 4}, {'id': 533, 'image_count': 11}, {'id': 534, 'image_count': 123}, {'id': 535, 'image_count': 7}, {'id': 536, 'image_count': 200}, {'id': 537, 'image_count': 91}, {'id': 538, 'image_count': 9}, {'id': 539, 'image_count': 72}, {'id': 540, 
'image_count': 1886}, {'id': 541, 'image_count': 4}, {'id': 542, 'image_count': 1}, {'id': 543, 'image_count': 1}, {'id': 544, 'image_count': 1932}, {'id': 545, 'image_count': 4}, {'id': 546, 'image_count': 56}, {'id': 547, 'image_count': 854}, {'id': 548, 'image_count': 755}, {'id': 549, 'image_count': 1843}, {'id': 550, 'image_count': 96}, {'id': 551, 'image_count': 7}, {'id': 552, 'image_count': 74}, {'id': 553, 'image_count': 66}, {'id': 554, 'image_count': 57}, {'id': 555, 'image_count': 44}, {'id': 556, 'image_count': 1905}, {'id': 557, 'image_count': 4}, {'id': 558, 'image_count': 90}, {'id': 559, 'image_count': 1635}, {'id': 560, 'image_count': 8}, {'id': 561, 'image_count': 5}, {'id': 562, 'image_count': 50}, {'id': 563, 'image_count': 545}, {'id': 564, 'image_count': 20}, {'id': 565, 'image_count': 193}, {'id': 566, 'image_count': 285}, {'id': 567, 'image_count': 3}, {'id': 568, 'image_count': 1}, {'id': 569, 'image_count': 1904}, {'id': 570, 'image_count': 294}, {'id': 571, 'image_count': 3}, {'id': 572, 'image_count': 5}, {'id': 573, 'image_count': 24}, {'id': 574, 'image_count': 2}, {'id': 575, 'image_count': 2}, {'id': 576, 'image_count': 16}, {'id': 577, 'image_count': 8}, {'id': 578, 'image_count': 154}, {'id': 579, 'image_count': 66}, {'id': 580, 'image_count': 1}, {'id': 581, 'image_count': 24}, {'id': 582, 'image_count': 1}, {'id': 583, 'image_count': 4}, {'id': 584, 'image_count': 75}, {'id': 585, 'image_count': 6}, {'id': 586, 'image_count': 126}, {'id': 587, 'image_count': 24}, {'id': 588, 'image_count': 22}, {'id': 589, 'image_count': 1872}, {'id': 590, 'image_count': 16}, {'id': 591, 'image_count': 423}, {'id': 592, 'image_count': 1927}, {'id': 593, 'image_count': 38}, {'id': 594, 'image_count': 3}, {'id': 595, 'image_count': 1945}, {'id': 596, 'image_count': 35}, {'id': 597, 'image_count': 1}, {'id': 598, 'image_count': 13}, {'id': 599, 'image_count': 9}, {'id': 600, 'image_count': 14}, {'id': 601, 'image_count': 37}, {'id': 602, 'image_count': 3}, {'id': 603, 'image_count': 4}, {'id': 604, 'image_count': 100}, {'id': 605, 'image_count': 195}, {'id': 606, 'image_count': 1}, {'id': 607, 'image_count': 12}, {'id': 608, 'image_count': 24}, {'id': 609, 'image_count': 489}, {'id': 610, 'image_count': 10}, {'id': 611, 'image_count': 1689}, {'id': 612, 'image_count': 42}, {'id': 613, 'image_count': 81}, {'id': 614, 'image_count': 894}, {'id': 615, 'image_count': 1868}, {'id': 616, 'image_count': 7}, {'id': 617, 'image_count': 1567}, {'id': 618, 'image_count': 10}, {'id': 619, 'image_count': 8}, {'id': 620, 'image_count': 7}, {'id': 621, 'image_count': 629}, {'id': 622, 'image_count': 89}, {'id': 623, 'image_count': 15}, {'id': 624, 'image_count': 134}, {'id': 625, 'image_count': 4}, {'id': 626, 'image_count': 1802}, {'id': 627, 'image_count': 595}, {'id': 628, 'image_count': 1210}, {'id': 629, 'image_count': 48}, {'id': 630, 'image_count': 418}, {'id': 631, 'image_count': 1846}, {'id': 632, 'image_count': 5}, {'id': 633, 'image_count': 221}, {'id': 634, 'image_count': 10}, {'id': 635, 'image_count': 7}, {'id': 636, 'image_count': 76}, {'id': 637, 'image_count': 22}, {'id': 638, 'image_count': 10}, {'id': 639, 'image_count': 341}, {'id': 640, 'image_count': 1}, {'id': 641, 'image_count': 705}, {'id': 642, 'image_count': 1900}, {'id': 643, 'image_count': 188}, {'id': 644, 'image_count': 227}, {'id': 645, 'image_count': 861}, {'id': 646, 'image_count': 6}, {'id': 647, 'image_count': 115}, {'id': 648, 'image_count': 5}, {'id': 649, 'image_count': 43}, {'id': 650, 
'image_count': 14}, {'id': 651, 'image_count': 6}, {'id': 652, 'image_count': 15}, {'id': 653, 'image_count': 1167}, {'id': 654, 'image_count': 15}, {'id': 655, 'image_count': 994}, {'id': 656, 'image_count': 28}, {'id': 657, 'image_count': 2}, {'id': 658, 'image_count': 338}, {'id': 659, 'image_count': 334}, {'id': 660, 'image_count': 15}, {'id': 661, 'image_count': 102}, {'id': 662, 'image_count': 1}, {'id': 663, 'image_count': 8}, {'id': 664, 'image_count': 1}, {'id': 665, 'image_count': 1}, {'id': 666, 'image_count': 28}, {'id': 667, 'image_count': 91}, {'id': 668, 'image_count': 260}, {'id': 669, 'image_count': 131}, {'id': 670, 'image_count': 128}, {'id': 671, 'image_count': 3}, {'id': 672, 'image_count': 10}, {'id': 673, 'image_count': 39}, {'id': 674, 'image_count': 2}, {'id': 675, 'image_count': 925}, {'id': 676, 'image_count': 354}, {'id': 677, 'image_count': 31}, {'id': 678, 'image_count': 10}, {'id': 679, 'image_count': 215}, {'id': 680, 'image_count': 71}, {'id': 681, 'image_count': 43}, {'id': 682, 'image_count': 28}, {'id': 683, 'image_count': 34}, {'id': 684, 'image_count': 16}, {'id': 685, 'image_count': 273}, {'id': 686, 'image_count': 2}, {'id': 687, 'image_count': 999}, {'id': 688, 'image_count': 4}, {'id': 689, 'image_count': 107}, {'id': 690, 'image_count': 2}, {'id': 691, 'image_count': 1}, {'id': 692, 'image_count': 454}, {'id': 693, 'image_count': 9}, {'id': 694, 'image_count': 1901}, {'id': 695, 'image_count': 61}, {'id': 696, 'image_count': 91}, {'id': 697, 'image_count': 46}, {'id': 698, 'image_count': 1402}, {'id': 699, 'image_count': 74}, {'id': 700, 'image_count': 421}, {'id': 701, 'image_count': 226}, {'id': 702, 'image_count': 10}, {'id': 703, 'image_count': 1720}, {'id': 704, 'image_count': 261}, {'id': 705, 'image_count': 1337}, {'id': 706, 'image_count': 293}, {'id': 707, 'image_count': 62}, {'id': 708, 'image_count': 814}, {'id': 709, 'image_count': 407}, {'id': 710, 'image_count': 6}, {'id': 711, 'image_count': 16}, {'id': 712, 'image_count': 7}, {'id': 713, 'image_count': 1791}, {'id': 714, 'image_count': 2}, {'id': 715, 'image_count': 1915}, {'id': 716, 'image_count': 1940}, {'id': 717, 'image_count': 13}, {'id': 718, 'image_count': 16}, {'id': 719, 'image_count': 448}, {'id': 720, 'image_count': 12}, {'id': 721, 'image_count': 18}, {'id': 722, 'image_count': 4}, {'id': 723, 'image_count': 71}, {'id': 724, 'image_count': 189}, {'id': 725, 'image_count': 74}, {'id': 726, 'image_count': 103}, {'id': 727, 'image_count': 3}, {'id': 728, 'image_count': 110}, {'id': 729, 'image_count': 5}, {'id': 730, 'image_count': 9}, {'id': 731, 'image_count': 15}, {'id': 732, 'image_count': 25}, {'id': 733, 'image_count': 7}, {'id': 734, 'image_count': 647}, {'id': 735, 'image_count': 824}, {'id': 736, 'image_count': 100}, {'id': 737, 'image_count': 47}, {'id': 738, 'image_count': 121}, {'id': 739, 'image_count': 731}, {'id': 740, 'image_count': 73}, {'id': 741, 'image_count': 49}, {'id': 742, 'image_count': 23}, {'id': 743, 'image_count': 4}, {'id': 744, 'image_count': 62}, {'id': 745, 'image_count': 118}, {'id': 746, 'image_count': 99}, {'id': 747, 'image_count': 40}, {'id': 748, 'image_count': 1036}, {'id': 749, 'image_count': 105}, {'id': 750, 'image_count': 21}, {'id': 751, 'image_count': 229}, {'id': 752, 'image_count': 7}, {'id': 753, 'image_count': 72}, {'id': 754, 'image_count': 9}, {'id': 755, 'image_count': 10}, {'id': 756, 'image_count': 328}, {'id': 757, 'image_count': 468}, {'id': 758, 'image_count': 1}, {'id': 759, 'image_count': 2}, {'id': 760, 
'image_count': 24}, {'id': 761, 'image_count': 11}, {'id': 762, 'image_count': 72}, {'id': 763, 'image_count': 17}, {'id': 764, 'image_count': 10}, {'id': 765, 'image_count': 17}, {'id': 766, 'image_count': 489}, {'id': 767, 'image_count': 47}, {'id': 768, 'image_count': 93}, {'id': 769, 'image_count': 1}, {'id': 770, 'image_count': 12}, {'id': 771, 'image_count': 228}, {'id': 772, 'image_count': 5}, {'id': 773, 'image_count': 76}, {'id': 774, 'image_count': 71}, {'id': 775, 'image_count': 30}, {'id': 776, 'image_count': 109}, {'id': 777, 'image_count': 14}, {'id': 778, 'image_count': 1}, {'id': 779, 'image_count': 8}, {'id': 780, 'image_count': 26}, {'id': 781, 'image_count': 339}, {'id': 782, 'image_count': 153}, {'id': 783, 'image_count': 2}, {'id': 784, 'image_count': 3}, {'id': 785, 'image_count': 8}, {'id': 786, 'image_count': 47}, {'id': 787, 'image_count': 8}, {'id': 788, 'image_count': 6}, {'id': 789, 'image_count': 116}, {'id': 790, 'image_count': 69}, {'id': 791, 'image_count': 13}, {'id': 792, 'image_count': 6}, {'id': 793, 'image_count': 1928}, {'id': 794, 'image_count': 79}, {'id': 795, 'image_count': 14}, {'id': 796, 'image_count': 7}, {'id': 797, 'image_count': 20}, {'id': 798, 'image_count': 114}, {'id': 799, 'image_count': 221}, {'id': 800, 'image_count': 502}, {'id': 801, 'image_count': 62}, {'id': 802, 'image_count': 87}, {'id': 803, 'image_count': 4}, {'id': 804, 'image_count': 1912}, {'id': 805, 'image_count': 7}, {'id': 806, 'image_count': 186}, {'id': 807, 'image_count': 18}, {'id': 808, 'image_count': 4}, {'id': 809, 'image_count': 3}, {'id': 810, 'image_count': 7}, {'id': 811, 'image_count': 1413}, {'id': 812, 'image_count': 7}, {'id': 813, 'image_count': 12}, {'id': 814, 'image_count': 248}, {'id': 815, 'image_count': 4}, {'id': 816, 'image_count': 1881}, {'id': 817, 'image_count': 529}, {'id': 818, 'image_count': 1932}, {'id': 819, 'image_count': 50}, {'id': 820, 'image_count': 3}, {'id': 821, 'image_count': 28}, {'id': 822, 'image_count': 10}, {'id': 823, 'image_count': 5}, {'id': 824, 'image_count': 5}, {'id': 825, 'image_count': 18}, {'id': 826, 'image_count': 14}, {'id': 827, 'image_count': 1890}, {'id': 828, 'image_count': 660}, {'id': 829, 'image_count': 8}, {'id': 830, 'image_count': 25}, {'id': 831, 'image_count': 10}, {'id': 832, 'image_count': 218}, {'id': 833, 'image_count': 36}, {'id': 834, 'image_count': 16}, {'id': 835, 'image_count': 808}, {'id': 836, 'image_count': 479}, {'id': 837, 'image_count': 1404}, {'id': 838, 'image_count': 307}, {'id': 839, 'image_count': 57}, {'id': 840, 'image_count': 28}, {'id': 841, 'image_count': 80}, {'id': 842, 'image_count': 11}, {'id': 843, 'image_count': 92}, {'id': 844, 'image_count': 20}, {'id': 845, 'image_count': 194}, {'id': 846, 'image_count': 23}, {'id': 847, 'image_count': 52}, {'id': 848, 'image_count': 673}, {'id': 849, 'image_count': 2}, {'id': 850, 'image_count': 2}, {'id': 851, 'image_count': 1}, {'id': 852, 'image_count': 2}, {'id': 853, 'image_count': 8}, {'id': 854, 'image_count': 80}, {'id': 855, 'image_count': 3}, {'id': 856, 'image_count': 3}, {'id': 857, 'image_count': 15}, {'id': 858, 'image_count': 2}, {'id': 859, 'image_count': 10}, {'id': 860, 'image_count': 386}, {'id': 861, 'image_count': 65}, {'id': 862, 'image_count': 3}, {'id': 863, 'image_count': 35}, {'id': 864, 'image_count': 5}, {'id': 865, 'image_count': 180}, {'id': 866, 'image_count': 99}, {'id': 867, 'image_count': 49}, {'id': 868, 'image_count': 28}, {'id': 869, 'image_count': 1}, {'id': 870, 'image_count': 52}, {'id': 871, 
'image_count': 36}, {'id': 872, 'image_count': 70}, {'id': 873, 'image_count': 6}, {'id': 874, 'image_count': 29}, {'id': 875, 'image_count': 24}, {'id': 876, 'image_count': 1115}, {'id': 877, 'image_count': 61}, {'id': 878, 'image_count': 18}, {'id': 879, 'image_count': 18}, {'id': 880, 'image_count': 665}, {'id': 881, 'image_count': 1096}, {'id': 882, 'image_count': 29}, {'id': 883, 'image_count': 8}, {'id': 884, 'image_count': 14}, {'id': 885, 'image_count': 1622}, {'id': 886, 'image_count': 2}, {'id': 887, 'image_count': 3}, {'id': 888, 'image_count': 32}, {'id': 889, 'image_count': 55}, {'id': 890, 'image_count': 1}, {'id': 891, 'image_count': 10}, {'id': 892, 'image_count': 10}, {'id': 893, 'image_count': 47}, {'id': 894, 'image_count': 3}, {'id': 895, 'image_count': 29}, {'id': 896, 'image_count': 342}, {'id': 897, 'image_count': 25}, {'id': 898, 'image_count': 1469}, {'id': 899, 'image_count': 521}, {'id': 900, 'image_count': 347}, {'id': 901, 'image_count': 35}, {'id': 902, 'image_count': 7}, {'id': 903, 'image_count': 207}, {'id': 904, 'image_count': 108}, {'id': 905, 'image_count': 2}, {'id': 906, 'image_count': 34}, {'id': 907, 'image_count': 12}, {'id': 908, 'image_count': 10}, {'id': 909, 'image_count': 13}, {'id': 910, 'image_count': 361}, {'id': 911, 'image_count': 1023}, {'id': 912, 'image_count': 782}, {'id': 913, 'image_count': 2}, {'id': 914, 'image_count': 5}, {'id': 915, 'image_count': 247}, {'id': 916, 'image_count': 221}, {'id': 917, 'image_count': 4}, {'id': 918, 'image_count': 8}, {'id': 919, 'image_count': 158}, {'id': 920, 'image_count': 3}, {'id': 921, 'image_count': 752}, {'id': 922, 'image_count': 64}, {'id': 923, 'image_count': 707}, {'id': 924, 'image_count': 143}, {'id': 925, 'image_count': 1}, {'id': 926, 'image_count': 49}, {'id': 927, 'image_count': 126}, {'id': 928, 'image_count': 76}, {'id': 929, 'image_count': 11}, {'id': 930, 'image_count': 11}, {'id': 931, 'image_count': 4}, {'id': 932, 'image_count': 39}, {'id': 933, 'image_count': 11}, {'id': 934, 'image_count': 13}, {'id': 935, 'image_count': 91}, {'id': 936, 'image_count': 14}, {'id': 937, 'image_count': 5}, {'id': 938, 'image_count': 3}, {'id': 939, 'image_count': 10}, {'id': 940, 'image_count': 18}, {'id': 941, 'image_count': 9}, {'id': 942, 'image_count': 6}, {'id': 943, 'image_count': 951}, {'id': 944, 'image_count': 2}, {'id': 945, 'image_count': 1}, {'id': 946, 'image_count': 19}, {'id': 947, 'image_count': 1942}, {'id': 948, 'image_count': 1916}, {'id': 949, 'image_count': 139}, {'id': 950, 'image_count': 43}, {'id': 951, 'image_count': 1969}, {'id': 952, 'image_count': 5}, {'id': 953, 'image_count': 134}, {'id': 954, 'image_count': 74}, {'id': 955, 'image_count': 381}, {'id': 956, 'image_count': 1}, {'id': 957, 'image_count': 381}, {'id': 958, 'image_count': 6}, {'id': 959, 'image_count': 1826}, {'id': 960, 'image_count': 28}, {'id': 961, 'image_count': 1635}, {'id': 962, 'image_count': 1967}, {'id': 963, 'image_count': 16}, {'id': 964, 'image_count': 1926}, {'id': 965, 'image_count': 1789}, {'id': 966, 'image_count': 401}, {'id': 967, 'image_count': 1968}, {'id': 968, 'image_count': 1167}, {'id': 969, 'image_count': 1}, {'id': 970, 'image_count': 56}, {'id': 971, 'image_count': 17}, {'id': 972, 'image_count': 1}, {'id': 973, 'image_count': 58}, {'id': 974, 'image_count': 9}, {'id': 975, 'image_count': 8}, {'id': 976, 'image_count': 1124}, {'id': 977, 'image_count': 31}, {'id': 978, 'image_count': 16}, {'id': 979, 'image_count': 491}, {'id': 980, 'image_count': 432}, {'id': 981, 
'image_count': 1945}, {'id': 982, 'image_count': 1899}, {'id': 983, 'image_count': 5}, {'id': 984, 'image_count': 28}, {'id': 985, 'image_count': 7}, {'id': 986, 'image_count': 146}, {'id': 987, 'image_count': 1}, {'id': 988, 'image_count': 25}, {'id': 989, 'image_count': 22}, {'id': 990, 'image_count': 1}, {'id': 991, 'image_count': 10}, {'id': 992, 'image_count': 9}, {'id': 993, 'image_count': 308}, {'id': 994, 'image_count': 4}, {'id': 995, 'image_count': 1969}, {'id': 996, 'image_count': 45}, {'id': 997, 'image_count': 12}, {'id': 998, 'image_count': 1}, {'id': 999, 'image_count': 85}, {'id': 1000, 'image_count': 1127}, {'id': 1001, 'image_count': 11}, {'id': 1002, 'image_count': 60}, {'id': 1003, 'image_count': 1}, {'id': 1004, 'image_count': 16}, {'id': 1005, 'image_count': 1}, {'id': 1006, 'image_count': 65}, {'id': 1007, 'image_count': 13}, {'id': 1008, 'image_count': 655}, {'id': 1009, 'image_count': 51}, {'id': 1010, 'image_count': 1}, {'id': 1011, 'image_count': 673}, {'id': 1012, 'image_count': 5}, {'id': 1013, 'image_count': 36}, {'id': 1014, 'image_count': 54}, {'id': 1015, 'image_count': 5}, {'id': 1016, 'image_count': 8}, {'id': 1017, 'image_count': 305}, {'id': 1018, 'image_count': 297}, {'id': 1019, 'image_count': 1053}, {'id': 1020, 'image_count': 223}, {'id': 1021, 'image_count': 1037}, {'id': 1022, 'image_count': 63}, {'id': 1023, 'image_count': 1881}, {'id': 1024, 'image_count': 507}, {'id': 1025, 'image_count': 333}, {'id': 1026, 'image_count': 1911}, {'id': 1027, 'image_count': 1765}, {'id': 1028, 'image_count': 1}, {'id': 1029, 'image_count': 5}, {'id': 1030, 'image_count': 1}, {'id': 1031, 'image_count': 9}, {'id': 1032, 'image_count': 2}, {'id': 1033, 'image_count': 151}, {'id': 1034, 'image_count': 82}, {'id': 1035, 'image_count': 1931}, {'id': 1036, 'image_count': 41}, {'id': 1037, 'image_count': 1895}, {'id': 1038, 'image_count': 24}, {'id': 1039, 'image_count': 22}, {'id': 1040, 'image_count': 35}, {'id': 1041, 'image_count': 69}, {'id': 1042, 'image_count': 962}, {'id': 1043, 'image_count': 588}, {'id': 1044, 'image_count': 21}, {'id': 1045, 'image_count': 825}, {'id': 1046, 'image_count': 52}, {'id': 1047, 'image_count': 5}, {'id': 1048, 'image_count': 5}, {'id': 1049, 'image_count': 5}, {'id': 1050, 'image_count': 1860}, {'id': 1051, 'image_count': 56}, {'id': 1052, 'image_count': 1582}, {'id': 1053, 'image_count': 7}, {'id': 1054, 'image_count': 2}, {'id': 1055, 'image_count': 1562}, {'id': 1056, 'image_count': 1885}, {'id': 1057, 'image_count': 1}, {'id': 1058, 'image_count': 5}, {'id': 1059, 'image_count': 137}, {'id': 1060, 'image_count': 1094}, {'id': 1061, 'image_count': 134}, {'id': 1062, 'image_count': 29}, {'id': 1063, 'image_count': 22}, {'id': 1064, 'image_count': 522}, {'id': 1065, 'image_count': 50}, {'id': 1066, 'image_count': 68}, {'id': 1067, 'image_count': 16}, {'id': 1068, 'image_count': 40}, {'id': 1069, 'image_count': 35}, {'id': 1070, 'image_count': 135}, {'id': 1071, 'image_count': 1413}, {'id': 1072, 'image_count': 772}, {'id': 1073, 'image_count': 50}, {'id': 1074, 'image_count': 1015}, {'id': 1075, 'image_count': 1}, {'id': 1076, 'image_count': 65}, {'id': 1077, 'image_count': 1900}, {'id': 1078, 'image_count': 1302}, {'id': 1079, 'image_count': 1977}, {'id': 1080, 'image_count': 2}, {'id': 1081, 'image_count': 29}, {'id': 1082, 'image_count': 36}, {'id': 1083, 'image_count': 138}, {'id': 1084, 'image_count': 4}, {'id': 1085, 'image_count': 67}, {'id': 1086, 'image_count': 26}, {'id': 1087, 'image_count': 25}, {'id': 1088, 
'image_count': 33}, {'id': 1089, 'image_count': 37}, {'id': 1090, 'image_count': 50}, {'id': 1091, 'image_count': 270}, {'id': 1092, 'image_count': 12}, {'id': 1093, 'image_count': 316}, {'id': 1094, 'image_count': 41}, {'id': 1095, 'image_count': 224}, {'id': 1096, 'image_count': 105}, {'id': 1097, 'image_count': 1925}, {'id': 1098, 'image_count': 1021}, {'id': 1099, 'image_count': 1213}, {'id': 1100, 'image_count': 172}, {'id': 1101, 'image_count': 28}, {'id': 1102, 'image_count': 745}, {'id': 1103, 'image_count': 187}, {'id': 1104, 'image_count': 147}, {'id': 1105, 'image_count': 136}, {'id': 1106, 'image_count': 34}, {'id': 1107, 'image_count': 41}, {'id': 1108, 'image_count': 636}, {'id': 1109, 'image_count': 570}, {'id': 1110, 'image_count': 1149}, {'id': 1111, 'image_count': 61}, {'id': 1112, 'image_count': 1890}, {'id': 1113, 'image_count': 18}, {'id': 1114, 'image_count': 143}, {'id': 1115, 'image_count': 1517}, {'id': 1116, 'image_count': 7}, {'id': 1117, 'image_count': 943}, {'id': 1118, 'image_count': 6}, {'id': 1119, 'image_count': 1}, {'id': 1120, 'image_count': 11}, {'id': 1121, 'image_count': 101}, {'id': 1122, 'image_count': 1909}, {'id': 1123, 'image_count': 800}, {'id': 1124, 'image_count': 1}, {'id': 1125, 'image_count': 44}, {'id': 1126, 'image_count': 3}, {'id': 1127, 'image_count': 44}, {'id': 1128, 'image_count': 31}, {'id': 1129, 'image_count': 7}, {'id': 1130, 'image_count': 20}, {'id': 1131, 'image_count': 11}, {'id': 1132, 'image_count': 13}, {'id': 1133, 'image_count': 1924}, {'id': 1134, 'image_count': 113}, {'id': 1135, 'image_count': 2}, {'id': 1136, 'image_count': 139}, {'id': 1137, 'image_count': 12}, {'id': 1138, 'image_count': 37}, {'id': 1139, 'image_count': 1866}, {'id': 1140, 'image_count': 47}, {'id': 1141, 'image_count': 1468}, {'id': 1142, 'image_count': 729}, {'id': 1143, 'image_count': 24}, {'id': 1144, 'image_count': 1}, {'id': 1145, 'image_count': 10}, {'id': 1146, 'image_count': 3}, {'id': 1147, 'image_count': 14}, {'id': 1148, 'image_count': 4}, {'id': 1149, 'image_count': 29}, {'id': 1150, 'image_count': 4}, {'id': 1151, 'image_count': 70}, {'id': 1152, 'image_count': 46}, {'id': 1153, 'image_count': 14}, {'id': 1154, 'image_count': 48}, {'id': 1155, 'image_count': 1855}, {'id': 1156, 'image_count': 113}, {'id': 1157, 'image_count': 1}, {'id': 1158, 'image_count': 1}, {'id': 1159, 'image_count': 10}, {'id': 1160, 'image_count': 54}, {'id': 1161, 'image_count': 1923}, {'id': 1162, 'image_count': 630}, {'id': 1163, 'image_count': 31}, {'id': 1164, 'image_count': 69}, {'id': 1165, 'image_count': 7}, {'id': 1166, 'image_count': 11}, {'id': 1167, 'image_count': 1}, {'id': 1168, 'image_count': 30}, {'id': 1169, 'image_count': 50}, {'id': 1170, 'image_count': 45}, {'id': 1171, 'image_count': 28}, {'id': 1172, 'image_count': 114}, {'id': 1173, 'image_count': 193}, {'id': 1174, 'image_count': 21}, {'id': 1175, 'image_count': 91}, {'id': 1176, 'image_count': 31}, {'id': 1177, 'image_count': 1469}, {'id': 1178, 'image_count': 1924}, {'id': 1179, 'image_count': 87}, {'id': 1180, 'image_count': 77}, {'id': 1181, 'image_count': 11}, {'id': 1182, 'image_count': 47}, {'id': 1183, 'image_count': 21}, {'id': 1184, 'image_count': 47}, {'id': 1185, 'image_count': 70}, {'id': 1186, 'image_count': 1838}, {'id': 1187, 'image_count': 19}, {'id': 1188, 'image_count': 531}, {'id': 1189, 'image_count': 11}, {'id': 1190, 'image_count': 941}, {'id': 1191, 'image_count': 113}, {'id': 1192, 'image_count': 26}, {'id': 1193, 'image_count': 5}, {'id': 1194, 'image_count': 
56}, {'id': 1195, 'image_count': 73}, {'id': 1196, 'image_count': 32}, {'id': 1197, 'image_count': 128}, {'id': 1198, 'image_count': 623}, {'id': 1199, 'image_count': 12}, {'id': 1200, 'image_count': 52}, {'id': 1201, 'image_count': 11}, {'id': 1202, 'image_count': 1674}, {'id': 1203, 'image_count': 81}] # noqa -# fmt: on diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/evaluation/cityscapes_evaluation.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/evaluation/cityscapes_evaluation.py deleted file mode 100644 index f5be637dc87b5ca8645563a4a921144f6c5fd877..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/evaluation/cityscapes_evaluation.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import glob -import logging -import numpy as np -import os -import tempfile -from collections import OrderedDict -import torch -from PIL import Image - -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.utils import comm -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class CityscapesEvaluator(DatasetEvaluator): - """ - Base class for evaluation using cityscapes API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): the name of the dataset. - It must have the following metadata associated with it: - "thing_classes", "gt_dir". - """ - self._metadata = MetadataCatalog.get(dataset_name) - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_") - self._temp_dir = self._working_dir.name - # All workers will write to the same results directory - # TODO this does not work in distributed training - assert ( - comm.get_local_size() == comm.get_world_size() - ), "CityscapesEvaluator currently do not work with multiple machines." - self._temp_dir = comm.all_gather(self._temp_dir)[0] - if self._temp_dir != self._working_dir.name: - self._working_dir.cleanup() - self._logger.info( - "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir) - ) - - -class CityscapesInstanceEvaluator(CityscapesEvaluator): - """ - Evaluate instance segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. 
- """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import name2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt") - - if "instances" in output: - output = output["instances"].to(self._cpu_device) - num_instances = len(output) - with open(pred_txt, "w") as fout: - for i in range(num_instances): - pred_class = output.pred_classes[i] - classes = self._metadata.thing_classes[pred_class] - class_id = name2label[classes].id - score = output.scores[i] - mask = output.pred_masks[i].numpy().astype("uint8") - png_filename = os.path.join( - self._temp_dir, basename + "_{}_{}.png".format(i, classes) - ) - - Image.fromarray(mask * 255).save(png_filename) - fout.write( - "{} {} {}\n".format(os.path.basename(png_filename), class_id, score) - ) - else: - # Cityscapes requires a prediction file for every ground truth image. - with open(pred_txt, "w") as fout: - pass - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP" and "AP50". - """ - comm.synchronize() - if comm.get_rank() > 0: - return - import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json") - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - )["averages"] - - ret = OrderedDict() - ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100} - self._working_dir.cleanup() - return ret - - -class CityscapesSemSegEvaluator(CityscapesEvaluator): - """ - Evaluate semantic segmentation results on cityscapes dataset using cityscapes API. - - Note: - * It does not work in multi-machine distributed training. - * It contains a synchronization, therefore has to be used on all ranks. - * Only the main process runs evaluation. 
- """ - - def process(self, inputs, outputs): - from cityscapesscripts.helpers.labels import trainId2label - - for input, output in zip(inputs, outputs): - file_name = input["file_name"] - basename = os.path.splitext(os.path.basename(file_name))[0] - pred_filename = os.path.join(self._temp_dir, basename + "_pred.png") - - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy() - pred = 255 * np.ones(output.shape, dtype=np.uint8) - for train_id, label in trainId2label.items(): - if label.ignoreInEval: - continue - pred[output == train_id] = label.id - Image.fromarray(pred).save(pred_filename) - - def evaluate(self): - comm.synchronize() - if comm.get_rank() > 0: - return - # Load the Cityscapes eval script *after* setting the required env var, - # since the script reads CITYSCAPES_DATASET into global variables at load time. - import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval - - self._logger.info("Evaluating results under {} ...".format(self._temp_dir)) - - # set some global states in cityscapes evaluation API, before evaluating - cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir) - cityscapes_eval.args.predictionWalk = None - cityscapes_eval.args.JSONOutput = False - cityscapes_eval.args.colorized = False - - # These lines are adopted from - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa - gt_dir = PathManager.get_local_path(self._metadata.gt_dir) - groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png")) - assert len( - groundTruthImgList - ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format( - cityscapes_eval.args.groundTruthSearch - ) - predictionImgList = [] - for gt in groundTruthImgList: - predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt)) - results = cityscapes_eval.evaluateImgLists( - predictionImgList, groundTruthImgList, cityscapes_eval.args - ) - ret = OrderedDict() - ret["sem_seg"] = { - "IoU": 100.0 * results["averageScoreClasses"], - "iIoU": 100.0 * results["averageScoreInstClasses"], - "IoU_sup": 100.0 * results["averageScoreCategories"], - "iIoU_sup": 100.0 * results["averageScoreInstCategories"], - } - self._working_dir.cleanup() - return ret diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/fcos.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/fcos.py deleted file mode 100644 index 150726a459b99c1aa26213043b8e609213218201..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/fcos.py +++ /dev/null @@ -1,328 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import logging -from typing import List, Optional, Tuple -import torch -from fvcore.nn import sigmoid_focal_loss_jit -from torch import nn -from torch.nn import functional as F - -from annotator.oneformer.detectron2.layers import ShapeSpec, batched_nms -from annotator.oneformer.detectron2.structures import Boxes, ImageList, Instances, pairwise_point_box_distance -from annotator.oneformer.detectron2.utils.events import get_event_storage - -from ..anchor_generator import DefaultAnchorGenerator -from ..backbone import Backbone -from ..box_regression import Box2BoxTransformLinear, _dense_box_regression_loss -from .dense_detector import DenseDetector -from .retinanet import RetinaNetHead - -__all__ = ["FCOS"] - -logger = logging.getLogger(__name__) - - -class FCOS(DenseDetector): - """ - Implement FCOS in :paper:`fcos`. - """ - - def __init__( - self, - *, - backbone: Backbone, - head: nn.Module, - head_in_features: Optional[List[str]] = None, - box2box_transform=None, - num_classes, - center_sampling_radius: float = 1.5, - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - test_score_thresh=0.2, - test_topk_candidates=1000, - test_nms_thresh=0.6, - max_detections_per_image=100, - pixel_mean, - pixel_std, - ): - """ - Args: - center_sampling_radius: radius of the "center" of a groundtruth box, - within which all anchor points are labeled positive. - Other arguments mean the same as in :class:`RetinaNet`. - """ - super().__init__( - backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std - ) - - self.num_classes = num_classes - - # FCOS uses one anchor point per location. - # We represent the anchor point by a box whose size equals the anchor stride. - feature_shapes = backbone.output_shape() - fpn_strides = [feature_shapes[k].stride for k in self.head_in_features] - self.anchor_generator = DefaultAnchorGenerator( - sizes=[[k] for k in fpn_strides], aspect_ratios=[1.0], strides=fpn_strides - ) - - # FCOS parameterizes box regression by a linear transform, - # where predictions are normalized by anchor stride (equal to anchor size). - if box2box_transform is None: - box2box_transform = Box2BoxTransformLinear(normalize_by_size=True) - self.box2box_transform = box2box_transform - - self.center_sampling_radius = float(center_sampling_radius) - - # Loss parameters: - self.focal_loss_alpha = focal_loss_alpha - self.focal_loss_gamma = focal_loss_gamma - - # Inference parameters: - self.test_score_thresh = test_score_thresh - self.test_topk_candidates = test_topk_candidates - self.test_nms_thresh = test_nms_thresh - self.max_detections_per_image = max_detections_per_image - - def forward_training(self, images, features, predictions, gt_instances): - # Transpose the Hi*Wi*A dimension to the middle: - pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances) - return self.losses( - anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ) - - @torch.no_grad() - def _match_anchors(self, gt_boxes: Boxes, anchors: List[Boxes]): - """ - Match ground-truth boxes to a set of multi-level anchors. - - Args: - gt_boxes: Ground-truth boxes from instances of an image. - anchors: List of anchors for each feature map (of different scales). 
- - Returns: - torch.Tensor - A tensor of shape `(M, R)`, given `M` ground-truth boxes and total - `R` anchor points from all feature levels, indicating the quality - of match between m-th box and r-th anchor. Higher value indicates - better match. - """ - # Naming convention: (M = ground-truth boxes, R = anchor points) - # Anchor points are represented as square boxes of size = stride. - num_anchors_per_level = [len(x) for x in anchors] - anchors = Boxes.cat(anchors) # (R, 4) - anchor_centers = anchors.get_centers() # (R, 2) - anchor_sizes = anchors.tensor[:, 2] - anchors.tensor[:, 0] # (R, ) - - lower_bound = anchor_sizes * 4 - lower_bound[: num_anchors_per_level[0]] = 0 - upper_bound = anchor_sizes * 8 - upper_bound[-num_anchors_per_level[-1] :] = float("inf") - - gt_centers = gt_boxes.get_centers() - - # FCOS with center sampling: anchor point must be close enough to - # ground-truth box center. - center_dists = (anchor_centers[None, :, :] - gt_centers[:, None, :]).abs_() - sampling_regions = self.center_sampling_radius * anchor_sizes[None, :] - - match_quality_matrix = center_dists.max(dim=2).values < sampling_regions - - pairwise_dist = pairwise_point_box_distance(anchor_centers, gt_boxes) - pairwise_dist = pairwise_dist.permute(1, 0, 2) # (M, R, 4) - - # The original FCOS anchor matching rule: anchor point must be inside GT. - match_quality_matrix &= pairwise_dist.min(dim=2).values > 0 - - # Multilevel anchor matching in FCOS: each anchor is only responsible - # for certain scale range. - pairwise_dist = pairwise_dist.max(dim=2).values - match_quality_matrix &= (pairwise_dist > lower_bound[None, :]) & ( - pairwise_dist < upper_bound[None, :] - ) - # Match the GT box with minimum area, if there are multiple GT matches. - gt_areas = gt_boxes.area() # (M, ) - - match_quality_matrix = match_quality_matrix.to(torch.float32) - match_quality_matrix *= 1e8 - gt_areas[:, None] - return match_quality_matrix # (M, R) - - @torch.no_grad() - def label_anchors(self, anchors: List[Boxes], gt_instances: List[Instances]): - """ - Same interface as :meth:`RetinaNet.label_anchors`, but implemented with FCOS - anchor matching rule. - - Unlike RetinaNet, there are no ignored anchors. - """ - - gt_labels, matched_gt_boxes = [], [] - - for inst in gt_instances: - if len(inst) > 0: - match_quality_matrix = self._match_anchors(inst.gt_boxes, anchors) - - # Find matched ground-truth box per anchor. Un-matched anchors are - # assigned -1. This is equivalent to using an anchor matcher as used - # in R-CNN/RetinaNet: `Matcher(thresholds=[1e-5], labels=[0, 1])` - match_quality, matched_idxs = match_quality_matrix.max(dim=0) - matched_idxs[match_quality < 1e-5] = -1 - - matched_gt_boxes_i = inst.gt_boxes.tensor[matched_idxs.clip(min=0)] - gt_labels_i = inst.gt_classes[matched_idxs.clip(min=0)] - - # Anchors with matched_idxs = -1 are labeled background. - gt_labels_i[matched_idxs < 0] = self.num_classes - else: - matched_gt_boxes_i = torch.zeros_like(Boxes.cat(anchors).tensor) - gt_labels_i = torch.full( - (len(matched_gt_boxes_i),), - fill_value=self.num_classes, - dtype=torch.long, - device=matched_gt_boxes_i.device, - ) - - gt_labels.append(gt_labels_i) - matched_gt_boxes.append(matched_gt_boxes_i) - - return gt_labels, matched_gt_boxes - - def losses( - self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ): - """ - This method is almost identical to :meth:`RetinaNet.losses`, with an extra - "loss_centerness" in the returned dict. 
- """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (M, R) - - pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes) - num_pos_anchors = pos_mask.sum().item() - get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images) - normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 300) - - # classification and regression loss - gt_labels_target = F.one_hot(gt_labels, num_classes=self.num_classes + 1)[ - :, :, :-1 - ] # no loss for the last (background) class - loss_cls = sigmoid_focal_loss_jit( - torch.cat(pred_logits, dim=1), - gt_labels_target.to(pred_logits[0].dtype), - alpha=self.focal_loss_alpha, - gamma=self.focal_loss_gamma, - reduction="sum", - ) - - loss_box_reg = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type="giou", - ) - - ctrness_targets = self.compute_ctrness_targets(anchors, gt_boxes) # (M, R) - pred_centerness = torch.cat(pred_centerness, dim=1).squeeze(dim=2) # (M, R) - ctrness_loss = F.binary_cross_entropy_with_logits( - pred_centerness[pos_mask], ctrness_targets[pos_mask], reduction="sum" - ) - return { - "loss_fcos_cls": loss_cls / normalizer, - "loss_fcos_loc": loss_box_reg / normalizer, - "loss_fcos_ctr": ctrness_loss / normalizer, - } - - def compute_ctrness_targets(self, anchors: List[Boxes], gt_boxes: List[torch.Tensor]): - anchors = Boxes.cat(anchors).tensor # Rx4 - reg_targets = [self.box2box_transform.get_deltas(anchors, m) for m in gt_boxes] - reg_targets = torch.stack(reg_targets, dim=0) # NxRx4 - if len(reg_targets) == 0: - return reg_targets.new_zeros(len(reg_targets)) - left_right = reg_targets[:, :, [0, 2]] - top_bottom = reg_targets[:, :, [1, 3]] - ctrness = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * ( - top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0] - ) - return torch.sqrt(ctrness) - - def forward_inference( - self, - images: ImageList, - features: List[torch.Tensor], - predictions: List[List[torch.Tensor]], - ): - pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - - results: List[Instances] = [] - for img_idx, image_size in enumerate(images.image_sizes): - scores_per_image = [ - # Multiply and sqrt centerness & classification scores - # (See eqn. 4 in https://arxiv.org/abs/2006.09214) - torch.sqrt(x[img_idx].sigmoid_() * y[img_idx].sigmoid_()) - for x, y in zip(pred_logits, pred_centerness) - ] - deltas_per_image = [x[img_idx] for x in pred_anchor_deltas] - results_per_image = self.inference_single_image( - anchors, scores_per_image, deltas_per_image, image_size - ) - results.append(results_per_image) - return results - - def inference_single_image( - self, - anchors: List[Boxes], - box_cls: List[torch.Tensor], - box_delta: List[torch.Tensor], - image_size: Tuple[int, int], - ): - """ - Identical to :meth:`RetinaNet.inference_single_image. - """ - pred = self._decode_multi_level_predictions( - anchors, - box_cls, - box_delta, - self.test_score_thresh, - self.test_topk_candidates, - image_size, - ) - keep = batched_nms( - pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh - ) - return pred[keep[: self.max_detections_per_image]] - - -class FCOSHead(RetinaNetHead): - """ - The head used in :paper:`fcos`. It adds an additional centerness - prediction branch on top of :class:`RetinaNetHead`. 
- """ - - def __init__(self, *, input_shape: List[ShapeSpec], conv_dims: List[int], **kwargs): - super().__init__(input_shape=input_shape, conv_dims=conv_dims, num_anchors=1, **kwargs) - # Unlike original FCOS, we do not add an additional learnable scale layer - # because it's found to have no benefits after normalizing regression targets by stride. - self._num_features = len(input_shape) - self.ctrness = nn.Conv2d(conv_dims[-1], 1, kernel_size=3, stride=1, padding=1) - torch.nn.init.normal_(self.ctrness.weight, std=0.01) - torch.nn.init.constant_(self.ctrness.bias, 0) - - def forward(self, features): - assert len(features) == self._num_features - logits = [] - bbox_reg = [] - ctrness = [] - for feature in features: - logits.append(self.cls_score(self.cls_subnet(feature))) - bbox_feature = self.bbox_subnet(feature) - bbox_reg.append(self.bbox_pred(bbox_feature)) - ctrness.append(self.ctrness(bbox_feature)) - return logits, bbox_reg, ctrness diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/instance_evaluation.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/instance_evaluation.py deleted file mode 100644 index 10cd092a482c1608c1a20f79a94e79a6153fc48a..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/instance_evaluation.py +++ /dev/null @@ -1,110 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/evaluation/instance_evaluation.py -# ------------------------------------------------------------------------------ - -import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import annotator.oneformer.detectron2.utils.comm as comm -from annotator.oneformer.detectron2.config import CfgNode -from annotator.oneformer.detectron2.data import MetadataCatalog -from annotator.oneformer.detectron2.data.datasets.coco import convert_to_coco_json -from annotator.oneformer.detectron2.evaluation.coco_evaluation import COCOEvaluator, _evaluate_predictions_on_coco -from annotator.oneformer.detectron2.evaluation.fast_eval_api import COCOeval_opt -from annotator.oneformer.detectron2.structures import Boxes, BoxMode, pairwise_iou -from annotator.oneformer.detectron2.utils.file_io import PathManager -from annotator.oneformer.detectron2.utils.logger import create_small_table - - -# modified from COCOEvaluator for instance segmetnat -class InstanceSegEvaluator(COCOEvaluator): - """ - Evaluate AR for object proposals, AP for instance detection/segmentation, AP - for keypoint detection outputs using COCO's metrics. - See http://cocodataset.org/#detection-eval and - http://cocodataset.org/#keypoints-eval to understand its metrics. - The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means - the metric cannot be computed (e.g. due to no predictions made). - - In addition to COCO, this evaluator is able to support any bounding box detection, - instance segmentation, or keypoint detection dataset. - """ - - def _eval_predictions(self, predictions, img_ids=None): - """ - Evaluate predictions. 
Fill self._results with the metrics of the tasks. - """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id - # all_contiguous_ids = list(dataset_id_to_contiguous_id.values()) - # num_classes = len(all_contiguous_ids) - # assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1 - - reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()} - for result in coco_results: - category_id = result["category_id"] - # assert category_id < num_classes, ( - # f"A prediction has class={category_id}, " - # f"but the dataset only has {num_classes} classes and " - # f"predicted class id should be in [0, {num_classes - 1}]." - # ) - assert category_id in reverse_id_mapping, ( - f"A prediction has class={category_id}, " - f"but the dataset only has class ids in {dataset_id_to_contiguous_id}." - ) - result["category_id"] = reverse_id_mapping[category_id] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" - coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/run.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/run.py deleted file mode 100644 index 53194ed1137ef2eaf63e8a83b51a5c8fde24aea4..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/run.py +++ /dev/null @@ -1,278 +0,0 @@ -"""Compute depth maps for images in the input folder. -""" -import os -import glob -import torch -import utils -import cv2 -import argparse -import time - -import numpy as np - -from imutils.video import VideoStream -from midas.model_loader import default_models, load_model - -first_execution = True -def process(device, model, model_type, image, input_size, target_size, optimize, use_camera): - """ - Run the inference and interpolate. 
- - Args: - device (torch.device): the torch device used - model: the model used for inference - model_type: the type of the model - image: the image fed into the neural network - input_size: the size (width, height) of the neural network input (for OpenVINO) - target_size: the size (width, height) the neural network output is interpolated to - optimize: optimize the model to half-floats on CUDA? - use_camera: is the camera used? - - Returns: - the prediction - """ - global first_execution - - if "openvino" in model_type: - if first_execution or not use_camera: - print(f" Input resized to {input_size[0]}x{input_size[1]} before entering the encoder") - first_execution = False - - sample = [np.reshape(image, (1, 3, *input_size))] - prediction = model(sample)[model.output(0)][0] - prediction = cv2.resize(prediction, dsize=target_size, - interpolation=cv2.INTER_CUBIC) - else: - sample = torch.from_numpy(image).to(device).unsqueeze(0) - - if optimize and device == torch.device("cuda"): - if first_execution: - print(" Optimization to half-floats activated. Use with caution, because models like Swin require\n" - " float precision to work properly and may yield non-finite depth values to some extent for\n" - " half-floats.") - sample = sample.to(memory_format=torch.channels_last) - sample = sample.half() - - if first_execution or not use_camera: - height, width = sample.shape[2:] - print(f" Input resized to {width}x{height} before entering the encoder") - first_execution = False - - prediction = model.forward(sample) - prediction = ( - torch.nn.functional.interpolate( - prediction.unsqueeze(1), - size=target_size[::-1], - mode="bicubic", - align_corners=False, - ) - .squeeze() - .cpu() - .numpy() - ) - - return prediction - - -def create_side_by_side(image, depth, grayscale): - """ - Take an RGB image and depth map and place them side by side. This includes a proper normalization of the depth map - for better visibility. - - Args: - image: the RGB image - depth: the depth map - grayscale: use a grayscale colormap? - - Returns: - the image and depth map place side by side - """ - depth_min = depth.min() - depth_max = depth.max() - normalized_depth = 255 * (depth - depth_min) / (depth_max - depth_min) - normalized_depth *= 3 - - right_side = np.repeat(np.expand_dims(normalized_depth, 2), 3, axis=2) / 3 - if not grayscale: - right_side = cv2.applyColorMap(np.uint8(right_side), cv2.COLORMAP_INFERNO) - - if image is None: - return right_side - else: - return np.concatenate((image, right_side), axis=1) - - -def run(input_path, output_path, model_path, model_type="dpt_beit_large_512", optimize=False, side=False, height=None, - square=False, grayscale=False): - """Run MonoDepthNN to compute depth maps. - - Args: - input_path (str): path to input folder - output_path (str): path to output folder - model_path (str): path to saved model - model_type (str): the model type - optimize (bool): optimize the model to half-floats on CUDA? - side (bool): RGB and depth side by side in output images? - height (int): inference encoder image height - square (bool): resize to a square resolution? - grayscale (bool): use a grayscale colormap? 
- """ - print("Initialize") - - # select device -# device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - device = torch.device("cpu") - print("Device: %s" % device) - - model, transform, net_w, net_h = load_model(device, model_path, model_type, optimize, height, square) - - # get input - if input_path is not None: - image_names = glob.glob(os.path.join(input_path, "*")) - num_images = len(image_names) - else: - print("No input path specified. Grabbing images from camera.") - - # create output folder - if output_path is not None: - os.makedirs(output_path, exist_ok=True) - - print("Start processing") - - if input_path is not None: - if output_path is None: - print("Warning: No output path specified. Images will be processed but not shown or stored anywhere.") - for index, image_name in enumerate(image_names): - - print(" Processing {} ({}/{})".format(image_name, index + 1, num_images)) - - # input - original_image_rgb = utils.read_image(image_name) # in [0, 1] - image = transform({"image": original_image_rgb})["image"] - - # compute - with torch.no_grad(): - prediction = process(device, model, model_type, image, (net_w, net_h), original_image_rgb.shape[1::-1], - optimize, False) - - # output - if output_path is not None: - filename = os.path.join( - output_path, os.path.splitext(os.path.basename(image_name))[0] + '-' + model_type - ) - if not side: - utils.write_depth(filename, prediction, grayscale, bits=2) - else: - original_image_bgr = np.flip(original_image_rgb, 2) - content = create_side_by_side(original_image_bgr*255, prediction, grayscale) - cv2.imwrite(filename + ".png", content) - utils.write_pfm(filename + ".pfm", prediction.astype(np.float32)) - - else: - with torch.no_grad(): - fps = 1 - video = VideoStream(0).start() - time_start = time.time() - frame_index = 0 - while True: - frame = video.read() - if frame is not None: - original_image_rgb = np.flip(frame, 2) # in [0, 255] (flip required to get RGB) - image = transform({"image": original_image_rgb/255})["image"] - - prediction = process(device, model, model_type, image, (net_w, net_h), - original_image_rgb.shape[1::-1], optimize, True) - - original_image_bgr = np.flip(original_image_rgb, 2) if side else None - content = create_side_by_side(original_image_bgr, prediction, grayscale) - cv2.imshow('MiDaS Depth Estimation - Press Escape to close window ', content/255) - - if output_path is not None: - filename = os.path.join(output_path, 'Camera' + '-' + model_type + '_' + str(frame_index)) - cv2.imwrite(filename + ".png", content) - - alpha = 0.1 - if time.time()-time_start > 0: - fps = (1 - alpha) * fps + alpha * 1 / (time.time()-time_start) # exponential moving average - time_start = time.time() - print(f"\rFPS: {round(fps,2)}", end="") - - if cv2.waitKey(1) == 27: # Escape key - break - - frame_index += 1 - print() - - print("Finished") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument('-i', '--input_path', - default=None, - help='Folder with input images (if no input path is specified, images are tried to be grabbed ' - 'from camera)' - ) - - parser.add_argument('-o', '--output_path', - default=None, - help='Folder for output images' - ) - - parser.add_argument('-m', '--model_weights', - default=None, - help='Path to the trained weights of model' - ) - - parser.add_argument('-t', '--model_type', - default='dpt_beit_large_512', - help='Model type: ' - 'dpt_beit_large_512, dpt_beit_large_384, dpt_beit_base_384, dpt_swin2_large_384, ' - 'dpt_swin2_base_384, 
dpt_swin2_tiny_256, dpt_swin_large_384, dpt_next_vit_large_384, ' - 'dpt_levit_224, dpt_large_384, dpt_hybrid_384, midas_v21_384, midas_v21_small_256 or ' - 'openvino_midas_v21_small_256' - ) - - parser.add_argument('-s', '--side', - action='store_true', - help='Output images contain RGB and depth images side by side' - ) - - parser.add_argument('--optimize', dest='optimize', action='store_true', help='Use half-float optimization') - parser.set_defaults(optimize=False) - - parser.add_argument('--height', - type=int, default=None, - help='Preferred height of images feed into the encoder during inference. Note that the ' - 'preferred height may differ from the actual height, because an alignment to multiples of ' - '32 takes place. Many models support only the height chosen during training, which is ' - 'used automatically if this parameter is not set.' - ) - parser.add_argument('--square', - action='store_true', - help='Option to resize images to a square resolution by changing their widths when images are ' - 'fed into the encoder during inference. If this parameter is not set, the aspect ratio of ' - 'images is tried to be preserved if supported by the model.' - ) - parser.add_argument('--grayscale', - action='store_true', - help='Use a grayscale colormap instead of the inferno one. Although the inferno colormap, ' - 'which is used by default, is better for visibility, it does not allow storing 16-bit ' - 'depth values in PNGs but only 8-bit ones due to the precision limitation of this ' - 'colormap.' - ) - - args = parser.parse_args() - - - if args.model_weights is None: - args.model_weights = default_models[args.model_type] - - # set torch options - torch.backends.cudnn.enabled = True - torch.backends.cudnn.benchmark = True - - # compute depth maps - run(args.input_path, args.output_path, args.model_weights, args.model_type, args.optimize, args.side, args.height, - args.square, args.grayscale) diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/diffusionmodules/model.py b/spaces/cvlab/zero123-live/taming-transformers/taming/modules/diffusionmodules/model.py deleted file mode 100644 index d3a5db6aa2ef915e270f1ae135e4a9918fdd884c..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/modules/diffusionmodules/model.py +++ /dev/null @@ -1,776 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = 
torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, t=None): - #assert x.shape[2] == 
x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x): - #assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution) - - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h 
= self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, **ignorekwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class VUNet(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - in_channels, c_channels, - resolution, z_channels, use_timestep=False, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - 
self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(c_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - self.z_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=1, - stride=1, - padding=0) - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=2*block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, z): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - z = self.z_in(z) - h = torch.cat((h,z),dim=1) - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - 
-class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - diff --git a/spaces/cvr/3classifier/README.md b/spaces/cvr/3classifier/README.md deleted file mode 100644 index c74a0511ed6fc6b868f5c854c30ec65501bd4c3f..0000000000000000000000000000000000000000 --- a/spaces/cvr/3classifier/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: 3classifier -emoji: 📈 -colorFrom: indigo -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. 
- -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/cyllum/soccertwos-analytics/README.md b/spaces/cyllum/soccertwos-analytics/README.md deleted file mode 100644 index e013057641a18f8d723a59c41037e9f29b14cd43..0000000000000000000000000000000000000000 --- a/spaces/cyllum/soccertwos-analytics/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SoccerTwos Analytics -emoji: ⚽ -colorFrom: blue -colorTo: green -sdk: docker -app_port: 8501 -pinned: true ---- - -## SoccerTwos Analytics - -A dashboard for analysing your team's performance in the [SoccerTwos competition](https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos). diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/generate_list.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/generate_list.py deleted file mode 100644 index 943d906781063c3584a7e5b5c784f8aac0694985..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/util/generate_list.py +++ /dev/null @@ -1,34 +0,0 @@ -"""This script is to generate training list files for Deep3DFaceRecon_pytorch -""" - -import os - -# save path to training data -def write_list(lms_list, imgs_list, msks_list, mode='train',save_folder='datalist', save_name=''): - save_path = os.path.join(save_folder, mode) - if not os.path.isdir(save_path): - os.makedirs(save_path) - with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in lms_list]) - - with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in imgs_list]) - - with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd: - fd.writelines([i + '\n' for i in msks_list]) - -# check if the path is valid -def check_list(rlms_list, rimgs_list, rmsks_list): - lms_list, imgs_list, msks_list = [], [], [] - for i in range(len(rlms_list)): - flag = 'false' - lm_path = rlms_list[i] - im_path = rimgs_list[i] - msk_path = rmsks_list[i] - if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path): - flag = 'true' - lms_list.append(rlms_list[i]) - imgs_list.append(rimgs_list[i]) - msks_list.append(rmsks_list[i]) - print(i, rlms_list[i], flag) - return lms_list, imgs_list, msks_list diff --git "a/spaces/dakaiye/dky_xuexi/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/dakaiye/dky_xuexi/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index c1e5dadd142de683323463d3df260cbe6eefa6d8..0000000000000000000000000000000000000000 --- "a/spaces/dakaiye/dky_xuexi/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,60 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, 
history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询gpt-3.5和gpt-4……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - -@CatchException -def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询ChatGPT和ChatGLM……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git a/spaces/danieldux/isco-gpt/prompts.py b/spaces/danieldux/isco-gpt/prompts.py deleted file mode 100644 index 356875824aeb468e244072f4cdf8f80b007ef359..0000000000000000000000000000000000000000 --- a/spaces/danieldux/isco-gpt/prompts.py +++ /dev/null @@ -1,26 +0,0 @@ -from langchain.prompts import PromptTemplate - -## Use a shorter template to reduce the number of tokens in the prompt -template = """Create a final answer to the given questions using the provided document excerpts(in no particular order) as references. ALWAYS include a "SOURCES" section in your answer including only the minimal set of sources needed to answer the question. If you are unable to answer the question, simply state that you do not know. Do not attempt to fabricate an answer and leave the SOURCES section empty. ---------- -QUESTION: What is the purpose of ARPA-H? -========= -Content: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. 
-Source: 1-32 -Content: While we’re at it, let’s make sure every American can get the health care they need. \n\nWe’ve already made historic investments in health care. \n\nWe’ve made it easier for Americans to get the care they need, when they need it. \n\nWe’ve made it easier for Americans to get the treatments they need, when they need them. \n\nWe’ve made it easier for Americans to get the medications they need, when they need them. -Source: 1-33 -Content: The V.A. is pioneering new ways of linking toxic exposures to disease, already helping veterans get the care they deserve. \n\nWe need to extend that same care to all Americans. \n\nThat’s why I’m calling on Congress to pass legislation that would establish a national registry of toxic exposures, and provide health care and financial assistance to those affected. -Source: 1-30 -========= -FINAL ANSWER: The purpose of ARPA-H is to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. -SOURCES: 1-32 ---------- -QUESTION: {question} -========= -{summaries} -========= -FINAL ANSWER:""" - -STUFF_PROMPT = PromptTemplate( - template=template, input_variables=["summaries", "question"] -) \ No newline at end of file diff --git a/spaces/davda54/chat-nort5/retrieval.py b/spaces/davda54/chat-nort5/retrieval.py deleted file mode 100644 index 9e27ea339e6a60a3b074ffc06ef970522b4fe3fc..0000000000000000000000000000000000000000 --- a/spaces/davda54/chat-nort5/retrieval.py +++ /dev/null @@ -1,183 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from question_detection_norbert3_small.modeling_norbert import NorbertForSequenceClassification -from keyword_generation_nort5_small.modeling_nort5 import NorT5ForConditionalGeneration -from norquad.modeling_norbert import NorbertForQuestionAnswering -from transformers import AutoTokenizer - -import wikipedia -import wikipediaapi - -from smart_open import open -from trie import MarisaTrie - - -class Retrival: - def __init__(self): - self.tokenizer = AutoTokenizer.from_pretrained("keyword_generation_nort5_small") - self.device = "cuda" if torch.cuda.is_available() else "cpu" - - self.question_detection = NorbertForSequenceClassification.from_pretrained("question_detection_norbert3_small") - self.question_detection = self.question_detection.to(self.device) - self.question_detection.eval() - - self.keyword_generation = NorT5ForConditionalGeneration.from_pretrained("keyword_generation_nort5_small") - self.keyword_generation = self.keyword_generation.to(self.device) - self.keyword_generation.eval() - - self.norquad = NorbertForQuestionAnswering.from_pretrained("norquad") - self.norquad = self.norquad.to(self.device) - self.norquad.eval() - - self.wiki = wikipediaapi.Wikipedia("ChatNorT5 (https://huggingface.co/spaces/davda54/chat-nort5", 'nb') - wikipedia.set_lang('nb') - - titles = [title.strip().replace('_', ' ') for title in open("https://dumps.wikimedia.org/nowiki/latest/nowiki-latest-all-titles-in-ns0.gz")] - titles = [[5] + self.tokenizer(title, add_special_tokens=False).input_ids + [6] for title in titles] - self.trie = MarisaTrie(titles) - - self.extraction_max_length = 384 # The maximum length of a feature (question and context) - self.extraction_doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed. 
- - def is_question(self, text): - if len(text) > 256 or len(text) < 8: - return False - - input_ids = self.tokenizer([text]).input_ids - input_ids = torch.tensor(input_ids, device=self.device) - - output = self.question_detection(input_ids).logits - - print(f"Question probability: {F.softmax(output.squeeze(), -1)[1].item() * 100.0:.2f}") - return output.argmax(-1).squeeze().item() == 1 - - def get_answer(self, text): - keywords = self._create_keywords(text) - print(keywords) - - for title, url, document in self._yield_documents(keywords): - response = self._extract_answer(text, document) - if response is None: - continue - - print(response) - return title, url, response - - return None, None, None - - def _extract_answer(self, question, document): - encoding = self.tokenizer( - question, - document, - truncation="only_second", - max_length=self.extraction_max_length, - stride=self.extraction_doc_stride, - return_overflowing_tokens=True, - return_offsets_mapping=True, - padding="max_length" - ) - - for i in range(len(encoding["input_ids"])): - sequence_ids = encoding.sequence_ids(i) - encoding["offset_mapping"][i] = [ - (o if sequence_ids[k] == 1 else None) - for k, o in enumerate(encoding["offset_mapping"][i]) - ] - - batch_size = len(encoding.input_ids) - start_logits, end_logits = [], [] - for i in range((batch_size + 7) // 8): - output = self.norquad(torch.tensor(encoding.input_ids[i*8 : (i+1)*8], device=self.device)) - start_logits.append(output.start_logits.cpu()) - end_logits.append(output.end_logits.cpu()) - - start_logits = torch.cat(start_logits, dim=0) - end_logits = torch.cat(end_logits, dim=0) - - valid_answers = [] - - for feature_index in range(batch_size): - offset_mapping = encoding["offset_mapping"][feature_index] - feature_null_score = start_logits[feature_index, 0] + end_logits[feature_index, 0] - - start_indexes = start_logits[feature_index].argsort(descending=True)[:20].tolist() - end_indexes = end_logits[feature_index].argsort(descending=True)[:20].tolist() - for start_index in start_indexes: - for end_index in end_indexes: - if ( - start_index >= len(offset_mapping) - or end_index >= len(offset_mapping) - or offset_mapping[start_index] is None - or offset_mapping[end_index] is None - ): - continue - - if end_index < start_index: - continue - - if start_logits[feature_index, start_index] + end_logits[feature_index, end_index] < feature_null_score: - continue - - start_char = offset_mapping[start_index][0] - end_char = offset_mapping[end_index][1] - valid_answers.append( - { - "score": start_logits[feature_index, start_index] + end_logits[feature_index, end_index], - "start": start_char, - "end": end_char - } - ) - - if len(valid_answers) == 0: - return None - - best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0] - start, end = best_answer["start"], best_answer["end"] - print(document[start:end]) - - left_context = document[max(0, start - 128):start] - answer = document[start:end] - right_context = document[end:end+256] - - left_context = ' '.join(left_context.split('\n')[-1].split()) - right_context = ' '.join(right_context.split('\n\n')[0].split()) - - return left_context, answer, right_context - - - def _create_keywords(self, text): - input_ids = self.tokenizer([text]).input_ids - input_ids = torch.tensor(input_ids, device=self.device) - - output = self.keyword_generation.generate( - input_ids=input_ids, - do_sample = False, - num_return_sequences=8, - num_beams=16, - temperature=1.0, - prefix_allowed_tokens_fn=lambda batch_id, sent: 
self.trie.get(sent.tolist()) - ) - keywords = self.tokenizer.batch_decode(output.cpu(), skip_special_tokens=True) - return keywords - - - def _yield_documents(self, keywords): - used_keywords = set() - for keyword in keywords: - suggestions = wikipedia.search(keyword, results=1, suggestion=False) - if len(suggestions) == 0: - continue - - if suggestions[0] in used_keywords: - continue - used_keywords.add(suggestions[0]) - - page = self.wiki.page(suggestions[0]) - - if not page.exists(): - continue - - print(suggestions[0], '->', page.fullurl) - yield suggestions[0], page.fullurl, page.text diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/log.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/log.py deleted file mode 100644 index 3cecea2bac185df741bccd0a32a5fef9cfe23299..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiohttp/log.py +++ /dev/null @@ -1,8 +0,0 @@ -import logging - -access_logger = logging.getLogger("aiohttp.access") -client_logger = logging.getLogger("aiohttp.client") -internal_logger = logging.getLogger("aiohttp.internal") -server_logger = logging.getLogger("aiohttp.server") -web_logger = logging.getLogger("aiohttp.web") -ws_logger = logging.getLogger("aiohttp.websocket") diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/grUtils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/grUtils.py deleted file mode 100644 index 785684b1eb30a76ae598bfe46416d4556fc422a0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/grUtils.py +++ /dev/null @@ -1,92 +0,0 @@ -import struct, warnings - -try: - import lz4 -except ImportError: - lz4 = None -else: - import lz4.block - -# old scheme for VERSION < 0.9 otherwise use lz4.block - - -def decompress(data): - (compression,) = struct.unpack(">L", data[4:8]) - scheme = compression >> 27 - size = compression & 0x07FFFFFF - if scheme == 0: - pass - elif scheme == 1 and lz4: - res = lz4.block.decompress(struct.pack("L", (scheme << 27) + (len(data) & 0x07FFFFFF)) - if scheme == 0: - return data - elif scheme == 1 and lz4: - res = lz4.block.compress( - data, mode="high_compression", compression=16, store_size=False - ) - return hdr + res - else: - warnings.warn("Table failed to compress by unsupported compression scheme") - return data - - -def _entries(attrs, sameval): - ak = 0 - vals = [] - lastv = 0 - for k, v in attrs: - if len(vals) and (k != ak + 1 or (sameval and v != lastv)): - yield (ak - len(vals) + 1, len(vals), vals) - vals = [] - ak = k - vals.append(v) - lastv = v - yield (ak - len(vals) + 1, len(vals), vals) - - -def entries(attributes, sameval=False): - g = _entries(sorted(attributes.items(), key=lambda x: int(x[0])), sameval) - return g - - -def bininfo(num, size=1): - if num == 0: - return struct.pack(">4H", 0, 0, 0, 0) - srange = 1 - select = 0 - while srange <= num: - srange *= 2 - select += 1 - select -= 1 - srange //= 2 - srange *= size - shift = num * size - srange - return struct.pack(">4H", num, srange, select, shift) - - -def num2tag(n): - if n < 0x200000: - return str(n) - else: - return ( - struct.unpack("4s", struct.pack(">L", n))[0].replace(b"\000", b"").decode() - ) - - -def tag2num(n): - try: - return int(n) - except ValueError: - n = (n + " ")[:4] - 
return struct.unpack(">L", n.encode("ascii"))[0] diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py deleted file mode 100644 index bb8116f2f5d5c23efe7a285a73d8dd13ec69b8c7..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py +++ /dev/null @@ -1,754 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -from packaging import version -from transformers import CLIPImageProcessor, XLMRobertaTokenizer - -from diffusers.utils import is_accelerate_available, is_accelerate_version - -from ...configuration_utils import FrozenDict -from ...image_processor import VaeImageProcessor -from ...loaders import TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor, replace_example_docstring -from ..pipeline_utils import DiffusionPipeline -from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from . 
import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import requests - >>> import torch - >>> from PIL import Image - >>> from io import BytesIO - - >>> from diffusers import AltDiffusionImg2ImgPipeline - - >>> device = "cuda" - >>> model_id_or_path = "BAAI/AltDiffusion-m9" - >>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) - >>> pipe = pipe.to(device) - - >>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" - - >>> response = requests.get(url) - >>> init_image = Image.open(BytesIO(response.content)).convert("RGB") - >>> init_image = init_image.resize((768, 512)) - - >>> # "A fantasy landscape, trending on artstation" - >>> prompt = "幻想风景, artstation" - - >>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images - >>> images[0].save("幻想风景.png") - ``` -""" - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess -def preprocess(image): - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 8 for x in (w, h)) # resize to integer multiple of 8 - - image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline with Stable->Alt, CLIPTextModel->RobertaSeriesModelWithTransformation, CLIPTokenizer->XLMRobertaTokenizer, AltDiffusionSafetyChecker->StableDiffusionSafetyChecker -class AltDiffusionImg2ImgPipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-guided image to image generation using Alt Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`RobertaSeriesModelWithTransformation`]): - Frozen text-encoder. Alt Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.RobertaSeriesModelWithTransformation), - specifically the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`XLMRobertaTokenizer`): - Tokenizer of class - [XLMRobertaTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.XLMRobertaTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
- safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: RobertaSeriesModelWithTransformation, - tokenizer: XLMRobertaTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Alt Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - self.register_to_config( - requires_safety_checker=requires_safety_checker, - ) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): - from accelerate import cpu_offload - else: - raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. 
- """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. 
- """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - feature_extractor_input = self.image_processor.postprocess(image, output_type="pil") - safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, prompt, strength, callback_steps, negative_prompt=None, prompt_embeds=None, negative_prompt_embeds=None - ): - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." 
- ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if isinstance(generator, list): - init_latents = [ - self.vae.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.vae.encode(image).latent_dist.sample(generator) - - init_latents = self.vae.config.scaling_factor * init_latents - - if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] == 0: - # expand init_latents for batch_size - deprecation_message = ( - f"You have passed {batch_size} text prompts (`prompt`), but only {init_latents.shape[0]} initial" - " images (`image`). Initial images are now duplicating to match the number of text prompts. Note" - " that this behavior is deprecated and will be removed in a version 1.0.0. Please make sure to update" - " your script to pass as many initial images as text prompts to suppress this warning." 
- ) - deprecate("len(prompt) != len(image)", "1.0.0", deprecation_message, standard_warn=False) - additional_image_per_prompt = batch_size // init_latents.shape[0] - init_latents = torch.cat([init_latents] * additional_image_per_prompt, dim=0) - elif batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0: - raise ValueError( - f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts." - ) - else: - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.FloatTensor, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. This parameter will be modulated by `strength`. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` - is less than `1`). 
- num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - Examples: - - Returns: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.AltDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, strength, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. 
Preprocess image - image = self.image_processor.preprocess(image) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. Prepare latent variables - latents = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, prompt_embeds.dtype, device, generator - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if output_type not in ["latent", "pt", "np", "pil"]: - deprecation_message = ( - f"the output_type {output_type} is outdated. Please make sure to set it to one of these instead: " - "`pil`, `np`, `pt`, `latent`" - ) - deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False) - output_type = "np" - - if output_type == "latent": - image = latents - has_nsfw_concept = None - - else: - image = self.decode_latents(latents) - - if self.safety_checker is not None: - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - else: - has_nsfw_concept = False - - image = self.image_processor.postprocess(image, output_type=output_type) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return AltDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/.ipynb_checkpoints/pipeline_stable_diffusion-checkpoint.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/.ipynb_checkpoints/pipeline_stable_diffusion-checkpoint.py deleted file mode 100644 index 73b9178e3ab1f9da9c74e3bc97355dbb63ae02b3..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/.ipynb_checkpoints/pipeline_stable_diffusion-checkpoint.py +++ /dev/null @@ -1,723 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Any, Callable, Dict, List, Optional, Union - -import torch -from packaging import version -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...loaders import TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import ( - deprecate, - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import StableDiffusionPipeline - - >>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) - >>> pipe = pipe.to("cuda") - - >>> prompt = "a photo of an astronaut riding a horse on mars" - >>> image = pipe(prompt).images[0] - ``` -""" - - -class StableDiffusionPipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-to-image generation using Stable Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely. 
If your checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_vae_tiling(self): - r""" - Enable tiled VAE decoding. - - When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in - several steps. This is useful to save a large amount of memory and to allow the processing of larger images. - """ - self.vae.enable_tiling() - - def disable_vae_tiling(self): - r""" - Disable tiled VAE decoding. If `enable_vae_tiling` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_tiling() - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. 
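For readers skimming this deleted pipeline source, a brief usage sketch of the memory-saving switches defined above may help. It assumes only the methods shown in this class (`enable_vae_slicing`, `enable_vae_tiling`, `enable_model_cpu_offload`); the checkpoint id and prompt are illustrative, not prescribed by this file, so treat it as a sketch rather than a canonical recipe.

```py
# Hedged usage sketch combining the memory-saving toggles defined in this class.
# The checkpoint id and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_vae_slicing()        # decode the batch one slice at a time to lower peak VRAM
pipe.enable_vae_tiling()         # push large images through the VAE tile by tile
pipe.enable_model_cpu_offload()  # keep only the active sub-model on the GPU at any moment

image = pipe("a watercolor fox in a misty forest").images[0]
```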
- """ - if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): - from accelerate import cpu_offload - else: - raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - @property - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). 
- prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
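As an aside on the pattern used in `prepare_extra_step_kwargs`, the standalone sketch below (with an illustrative helper name, `build_step_kwargs`, not part of this file) shows the same signature-introspection trick: forward `eta` and `generator` only when the scheduler's `step` method actually accepts them.

```py
# Standalone sketch of the introspection pattern; `build_step_kwargs` is a hypothetical name.
import inspect

def build_step_kwargs(scheduler, generator=None, eta: float = 0.0) -> dict:
    accepted = set(inspect.signature(scheduler.step).parameters)
    kwargs = {}
    if "eta" in accepted:        # e.g. DDIMScheduler exposes `eta`; most schedulers do not
        kwargs["eta"] = eta
    if "generator" in accepted:  # schedulers that draw fresh noise can take a generator
        kwargs["generator"] = generator
    return kwargs
```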
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. 
- prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. 
Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/main.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/main.py deleted file mode 100644 index 3b563a5d001be7adfbe779dee7ad8ac49aadc50d..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/main.py +++ /dev/null @@ -1,596 +0,0 @@ -from inspect import getargs -import logging -import os -import random -from datetime import datetime -import bisect -import copy -import numpy as np -import torch -import torch.backends.cudnn as cudnn -from torch import optim -from torch.cuda.amp import GradScaler -import faulthandler -import pathlib - -try: - import wandb -except ImportError: - wandb = None - -try: - import torch.utils.tensorboard as tensorboard -except ImportError: - tensorboard = None - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -from open_clip import create_model_and_transforms, trace_model, create_model -from training.data import get_data -from training.distributed import is_master, init_distributed_device, world_info_from_env -from training.logger import setup_logging -from training.params import parse_args -from training.scheduler import cosine_lr -from training.train import train_one_epoch, evaluate -from open_clip.utils import dataset_split, get_optimizer - - -def maintain_ckpts(args, startidx, all_idx_len): - for i in reversed(range(startidx, all_idx_len)): - if os.path.exists(os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt")): - os.rename( - os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"), - os.path.join(args.checkpoint_path, f"epoch_top_{i+1}.pt"), - ) - if os.path.exists( - os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt") - ): - os.remove(os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt")) - return - - -def update_top_k_performance( - new_metrics_inputs, current_top_k_ckpt_metrics, args, ckpt, bignumbetter=True -): - """ - Record the top-k performance of the current epoch. 
- current_top_k_metrics is a dictionary of the form: {1: top_1_ckpt_measure, 2: top_2_ckpt_measure, ...} - """ - if isinstance(new_metrics_inputs, (list, tuple)): - new_metrics_inputs = np.mean(new_metrics_inputs) - return update_top_k_performance( - new_metrics_inputs, - current_top_k_ckpt_metrics, - args=args, - ckpt=ckpt, - bignumbetter=bignumbetter, - ) - elif isinstance(new_metrics_inputs, dict): - new_metrics_inputs = np.mean(list(new_metrics_inputs.values())) - return update_top_k_performance( - new_metrics_inputs, - current_top_k_ckpt_metrics, - args=args, - ckpt=ckpt, - bignumbetter=bignumbetter, - ) - elif isinstance(new_metrics_inputs, (float, int)): - update_flag = {k: False for k in current_top_k_ckpt_metrics.keys()} - sorted_keys = sorted(current_top_k_ckpt_metrics.keys()) - sorted_values = sorted( - current_top_k_ckpt_metrics.values(), reverse=bignumbetter - ) - sorted_values_ = copy.deepcopy(sorted_values) - sorted_values.append(new_metrics_inputs) - sorted_values = sorted(sorted_values, reverse=bignumbetter) - sorted_values = sorted_values[:-1] - - if sorted_values == sorted_values_: - return current_top_k_ckpt_metrics, new_metrics_inputs - else: - for i in range(len(sorted_keys)): - if current_top_k_ckpt_metrics[sorted_keys[i]] != sorted_values[i]: - current_top_k_ckpt_metrics[sorted_keys[i]] = sorted_values[i] - update_flag[sorted_keys[i]] = True - for i in range(len(update_flag)): - if update_flag[i]: - maintain_ckpts(args, i, len(sorted_keys)) - torch.save( - ckpt, - os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"), - ) - break - return current_top_k_ckpt_metrics, new_metrics_inputs - - -# def updateifNone(a, b): -# a = b if None else a -# return a - - -def is_pretrained_params(n): - return ( - n.startswith("transformer") - or n in ["positional_embedding", "text_projection"] - or n.startswith("token_embedding") - or n.startswith("ln_final") - or n.startswith("logit_scale_t") - ) - - -def random_seed(seed=42, rank=0): - torch.manual_seed(seed + rank) - np.random.seed(seed + rank) - random.seed(seed + rank) - - -def main(): - args = parse_args() - # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule? - args.amodel = args.amodel.replace("/", "-") - # download sizes.json file - - # (yusong): the below two lines are for debug - # print("setting up faulthandler") - # faulthandler.register(10) - - random.seed(args.seed) - torch.manual_seed(args.seed) - torch.cuda.manual_seed(args.seed) - torch.cuda.manual_seed_all(args.seed) - np.random.seed(args.seed) - if args.tmodel == "bert" or args.tmodel == "roberta" or args.tmodel == "bart": - assert ( - args.pretrained == "" or args.pretrained is None - ), "bert/roberta/bart text encoder does not support pretrained models." 
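Since `update_top_k_performance` above is fairly dense, here is a small standalone sketch of the underlying bookkeeping (function and variable names are hypothetical): keep the k best validation scores seen so far and find the slot a new score would occupy, which roughly mirrors how the real helper decides which `epoch_top_{i}.pt` checkpoint to rewrite.

```py
# Standalone sketch of the top-k checkpoint bookkeeping; names are hypothetical.
def top_k_insert(best_scores, new_score, k, bigger_is_better=True):
    ranked = sorted(best_scores, reverse=bigger_is_better)[:k]
    updated = sorted(ranked + [new_score], reverse=bigger_is_better)[:k]
    if updated == ranked:
        return ranked, None                   # new score did not make the top k
    return updated, updated.index(new_score)  # new top-k list and the slot it took

scores, slot = top_k_insert([0.61, 0.58, 0.40], 0.60, k=3)
# scores == [0.61, 0.60, 0.58]; slot == 1, i.e. the second-best checkpoint slot is replaced
```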
- - # get the name of the experiments - if args.name is None: - args.name = "-".join( - [ - datetime.now().strftime("%Y_%m_%d-%H_%M_%S"), - f"model_{args.amodel}", - f"lr_{args.lr}", - f"b_{args.batch_size}", - f"j_{args.workers}", - f"p_{args.precision}", - ] - ) - - # discover initial world args early so we can log properly - args.distributed = False - args.local_rank, args.rank, args.world_size = world_info_from_env() - - if args.remotedata and is_master(args): - for dataset_name in args.datasetnames: - for split in dataset_split[dataset_name]: - if not os.path.exists(f"./json_files/{dataset_name}/{split}"): - os.makedirs(f"./json_files/{dataset_name}/{split}") - os.system( - f"aws s3 cp s3://s-laion-audio/webdataset_tar/{dataset_name}/{split}/sizes.json ./json_files/{dataset_name}/{split}/sizes.json" - ) - - args.log_path = None - if is_master(args, local=args.log_local): - log_base_path = os.path.join(args.logs, args.name) - os.makedirs(log_base_path, exist_ok=True) - log_filename = f"out-{args.rank}" if args.log_local else "out.log" - args.log_path = os.path.join(log_base_path, log_filename) - if os.path.exists(args.log_path): - print( - "Error. Experiment already exists. Use --name {} to specify a new experiment." - ) - return -1 - - # Set logger - args.log_level = logging.DEBUG if args.debug else logging.INFO - setup_logging(args.log_path, args.log_level) - - # fully initialize distributed device environment - device = init_distributed_device(args) - - args.wandb = "wandb" in args.report_to or "all" in args.report_to - args.tensorboard = "tensorboard" in args.report_to or "all" in args.report_to - if is_master(args): - args.tensorboard_path = ( - os.path.join(args.logs, args.name, "tensorboard") - if args.tensorboard - else "" - ) - args.checkpoint_path = os.path.join(args.logs, args.name, "checkpoints") - for dirname in [args.tensorboard_path, args.checkpoint_path]: - if dirname: - os.makedirs(dirname, exist_ok=True) - else: - args.tensorboard_path = "" - args.checkpoint_path = "" - - if args.copy_codebase: - copy_codebase(args) - - assert args.precision in ["amp", "fp16", "fp32"] - if args.precision == "fp16": - logging.warning( - "It is recommended to use AMP mixed-precision instead of FP16. " - "FP16 support needs further verification and tuning, especially for train." - ) - - if args.horovod: - logging.info( - f"Running in horovod mode with multiple processes / nodes. Device: {args.device}." - f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}." - ) - elif args.distributed: - logging.info( - f"Running in distributed mode with multiple processes. Device: {args.device}." - f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}." - ) - else: - logging.info(f"Running with a single process. 
Device {args.device}.") - - logging.info(f"openai cache dir: {os.path.expanduser(args.openai_model_cache_dir)}") - - model, model_cfg = create_model( - args.amodel, - args.tmodel, - args.pretrained, - precision=args.precision, - device=device, - jit=args.torchscript, - force_quick_gelu=args.force_quick_gelu, - openai_model_cache_dir=os.path.expanduser(args.openai_model_cache_dir), - skip_params=True, - pretrained_audio=args.pretrained_audio, - pretrained_text=args.pretrained_text, - enable_fusion=args.enable_fusion, - fusion_type=args.fusion_type, - ) - - if args.horovod: - with torch.no_grad(): - for param in model.parameters(): - param.set_(param.contiguous()) - - if args.trace: - model = trace_model(model, batch_size=args.batch_size, device=device) - - if is_master(args): - logging.info("Model:") - logging.info(f"{str(model)}") - logging.info("Params:") - params_file = os.path.join(args.logs, args.name, "params.txt") - with open(params_file, "w") as f: - for name in sorted(vars(args)): - val = getattr(args, name) - logging.info(f" {name}: {val}") - f.write(f"{name}: {val}\n") - - if args.distributed and not args.horovod: - if args.use_bn_sync: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) - ddp_args = {} - if args.ddp_static_graph: - # this doesn't exist in older PyTorch, arg only added if enabled - ddp_args["static_graph"] = True - model = torch.nn.parallel.DistributedDataParallel( - model, device_ids=[device], find_unused_parameters=True, **ddp_args - ) - - data = get_data(args, model_cfg) - assert len(data), "At least one train or eval dataset must be specified." - if args.trace: - assert "train" not in data, "Cannot train with traced model" - - exclude = ( - lambda n, p: p.ndim < 2 - or "bn" in n - or "ln" in n - or "bias" in n - or "logit_scale" in n - ) - include = lambda n, p: not exclude(n, p) - - named_parameters = list(model.named_parameters()) - - # freeze text encoder - text_freeze_parameters = [p for n, p in named_parameters if "text_branch" in n] - - if args.freeze_text: - print("Freeze Text!!!!") - for k in text_freeze_parameters: - k.requires_grad = False - - gain_or_bias_params = [ - p for n, p in named_parameters if exclude(n, p) and p.requires_grad - ] - rest_params = [p for n, p in named_parameters if include(n, p) and p.requires_grad] - - # set wd-related params to 0 if use adam optimizer - if args.optimizer == "adam": - args.wd = 0 - args.wd_pretrained = 0 - args.wd_new = 0 - - if args.train_data is None: - optimizer = None - scheduler = None - else: - total_steps = data["train"].dataloader.num_batches * args.epochs - - if args.split_opt: - for x in ["lr", "beta1", "beta2", "eps", "wd"]: - for y in ["_new", "_pretrained"]: - if getattr(args, x + y) is None: - setattr(args, x + y, getattr(args, x)) - - gain_or_bias_pretrained_params = [ - p - for n, p in named_parameters - if (exclude(n, p) and p.requires_grad) and is_pretrained_params(n) - ] - rest_pretrained_params = [ - p - for n, p in named_parameters - if (include(n, p) and p.requires_grad) and is_pretrained_params(n) - ] - gain_or_bias_new_params = [ - p - for n, p in named_parameters - if (exclude(n, p) and p.requires_grad) and (not is_pretrained_params(n)) - ] - rest_new_params = [ - p - for n, p in named_parameters - if (include(n, p) and p.requires_grad) and (not is_pretrained_params(n)) - ] - pretrained_params_optimizer = get_optimizer( - [ - {"params": gain_or_bias_pretrained_params, "weight_decay": 0.0}, - { - "params": rest_pretrained_params, - "weight_decay": args.wd_pretrained, - }, 
- ], - lr=args.lr_pretrained, - betas=(args.beta1_pretrained, args.beta2_pretrained), - eps=args.eps_pretrained, - momentum=args.momentum_pretrained, - optimizer_name=args.optimizer, - ) - pretrained_params_scheduler = cosine_lr( - pretrained_params_optimizer, - args.lr_pretrained, - args.warmup, - total_steps, - ) - new_params_optimizer = get_optimizer( - [ - {"params": gain_or_bias_new_params, "weight_decay": 0.0}, - {"params": rest_new_params, "weight_decay": args.wd_new}, - ], - lr=args.lr_new, - betas=(args.beta1_new, args.beta2_new), - eps=args.eps_new, - momentum=args.momentum_new, - optimizer_name=args.optimizer, - ) - - new_params_scheduler = cosine_lr( - new_params_optimizer, args.lr_new, args.warmup, total_steps - ) - - optimizer = { - "pretrained": pretrained_params_optimizer, - "new": new_params_optimizer, - } - scheduler = { - "pretrained": pretrained_params_scheduler, - "new": new_params_scheduler, - } - - if args.horovod: - pretrained_params_optimizer = hvd.DistributedOptimizer( - pretrained_params_optimizer, - named_parameters=model.named_parameters(), - ) - new_params_optimizer = hvd.DistributedOptimizer( - new_params_optimizer, named_parameters=model.named_parameters() - ) - hvd.broadcast_parameters(model.state_dict(), root_rank=0) - hvd.broadcast_optimizer_state(pretrained_params_optimizer, root_rank=0) - hvd.broadcast_optimizer_state(new_params_optimizer, root_rank=0) - else: - optimizer = get_optimizer( - [ - {"params": gain_or_bias_params, "weight_decay": 0.0}, - {"params": rest_params, "weight_decay": args.wd}, - ], - lr=args.lr, - betas=(args.beta1, args.beta2), - eps=args.eps, - momentum=args.momentum, - optimizer_name=args.optimizer, - ) - - scheduler = cosine_lr(optimizer, args.lr, args.warmup, total_steps) - - if args.horovod: - optimizer = hvd.DistributedOptimizer( - optimizer, named_parameters=model.named_parameters() - ) - hvd.broadcast_parameters(model.state_dict(), root_rank=0) - hvd.broadcast_optimizer_state(optimizer, root_rank=0) - - scaler = GradScaler() if args.precision == "amp" else None - - # optionally resume from a checkpoint - start_epoch = 0 - if args.resume is not None: - if os.path.isfile(args.resume): - checkpoint = torch.load(args.resume, map_location=device) - if "epoch" in checkpoint: - # resuming a train checkpoint w/ epoch and optimizer state - start_epoch = checkpoint["epoch"] - sd = checkpoint["state_dict"] - if not args.distributed and next(iter(sd.items()))[0].startswith( - "module" - ): - sd = {k[len("module.") :]: v for k, v in sd.items()} - model.load_state_dict(sd) - if args.split_opt: - if optimizer is not None: - for k, o_ in optimizer.items(): - o_.load_state_dict(checkpoint[k + "_" + "optimizer"]) - if optimizer is not None: - optimizer.load_state_dict(checkpoint["optimizer"]) - if scaler is not None and "scaler" in checkpoint: - scaler.load_state_dict(checkpoint["scaler"]) - logging.info( - f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})" - ) - else: - # loading a bare (model only) checkpoint for fine-tune or evaluation - model.load_state_dict(checkpoint) - logging.info( - f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})" - ) - if args.freeze_text: - print("Freeze Text!!!!") - for k in text_freeze_parameters: - k.requires_grad = False - else: - logging.info("=> no checkpoint found at '{}'".format(args.resume)) - - cudnn.benchmark = True - cudnn.deterministic = False - - # determine if this worker should save logs and checkpoints. 
only do so if it is rank == 0 - args.save_logs = args.logs and args.logs.lower() != "none" and is_master(args) - writer = None - if args.save_logs and args.tensorboard: - assert tensorboard is not None, "Please install tensorboard." - writer = tensorboard.SummaryWriter(args.tensorboard_path) - - if args.wandb and is_master(args): - assert wandb is not None, "Please install wandb." - logging.debug("Starting wandb.") - args.train_sz = data["train"].dataloader.num_samples - if args.val_data is not None: - args.val_sz = data["val"].dataloader.num_samples - # you will have to configure this for your project! - wandb.init( - project="clap", - notes=args.wandb_notes, - name=args.wandb_notes, - tags=[], - config=vars(args), - ) - if args.debug: - wandb.watch(model, log="all") - wandb.save(params_file) - logging.debug("Finished loading wandb.") - - if "train" not in data: - evaluate(model, data, start_epoch, args, writer) - return - elif start_epoch == 0 and "val" in data and not args.no_eval: - evaluate(model, data, 0, args, writer) - # print(f'rank {args.rank}, Start First Evaluation')# (yusong): for debug - if args.save_top_performance: - current_top_k_ckpt_metrics = { - i: 0 for i in range(args.save_top_performance) - } # initialize the top-k metric for ckpts to 0 - - # print(f'rank {args.rank}, Start Training') # (yusong): for debug - for epoch in range(start_epoch, args.epochs): - # freeze the text param after (include) args.freeze_text_after, this is -1 by default - if epoch == args.freeze_text_after: - print("Text pretrained parameters are freezed since this epoch.") - for k in text_freeze_parameters: - k.requires_grad = False - if is_master(args): - logging.info(f"Start epoch {epoch}") - - train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer) - completed_epoch = epoch + 1 - - if ( - any(v in data for v in ("val", "imagenet-val", "imagenet-v2")) - and not args.no_eval - ): - metrics = evaluate(model, data, completed_epoch, args, writer) - if args.save_top_performance: - top_k_dataset = args.top_k_checkpoint_select_dataset - top_k_metric = args.top_k_checkpoint_select_metric - filtered_metrics = [ - v - for k, v in metrics.items() - if top_k_metric in k and top_k_dataset in k - ] # check all R@10 metrics (all dataset) and use it to update the ckpt - # Saving checkpoints. - if args.save_logs: - if args.split_opt: - opt_dict = { - k + "_" + "optimizer": v.state_dict() for k, v in optimizer.items() - } - else: - opt_dict = {"optimizer": optimizer.state_dict()} - checkpoint_dict = { - "epoch": completed_epoch, - "name": args.name, - "state_dict": model.state_dict(), - } - checkpoint_dict.update(opt_dict) - if scaler is not None: - checkpoint_dict["scaler"] = scaler.state_dict() - - if completed_epoch == args.epochs or ( - args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0 - ): - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"), - ) - if args.save_most_recent: - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_latest.pt"), - ) - if args.save_top_performance and not args.no_eval: - update_top_k_performance( - filtered_metrics, - current_top_k_ckpt_metrics, - args, - checkpoint_dict, - bignumbetter=True, - ) - - if args.wandb and is_master(args): - wandb.finish() - - -def copy_codebase(args): - from shutil import copytree, ignore_patterns - - new_code_path = os.path.join(args.logs, args.name, "code") - if os.path.exists(new_code_path): - print( - f"Error. 
Experiment already exists at {new_code_path}. Use --name to specify a new experiment." - ) - return -1 - print(f"Copying codebase to {new_code_path}") - current_code_path = os.path.realpath(__file__) - for _ in range(3): - current_code_path = os.path.dirname(current_code_path) - copytree( - current_code_path, new_code_path, ignore=ignore_patterns("log", "logs", "wandb") - ) - print("Done copying code.") - return 1 - - -if __name__ == "__main__": - main() diff --git a/spaces/diacanFperku/AutoGPT/CorelDRAW Graphics Suite X9 18.0.0.448 Keygen Crack ((EXCLUSIVE)).md b/spaces/diacanFperku/AutoGPT/CorelDRAW Graphics Suite X9 18.0.0.448 Keygen Crack ((EXCLUSIVE)).md deleted file mode 100644 index 780f9584c0bc016a5a545a91fa96a83a6f6c202e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/CorelDRAW Graphics Suite X9 18.0.0.448 Keygen Crack ((EXCLUSIVE)).md +++ /dev/null @@ -1,18 +0,0 @@ -

    CorelDRAW Graphics Suite X9 18.0.0.448 Keygen crack


    Download Zip ★★★★★ https://gohhs.com/2uFTkt



    - -Download CorelDRAW Graphics Suite X9 Full Version Free & Install Software. CorelDRAW Graphics Suite X9. 5 software is best for all who use Photoshop. CorelDRAW Graphics Suite X8 18. 0. - -You can see in the table below which version of Photoshop does CorelDRAW Graphics Suite X9 need to run and for which version does it work. … For this reason, CorelDRAW Graphics Suite X9 can be a handy alternative to Photoshop for various imaging and design tasks. To get started, CorelDRAW Graphics Suite X9 includes the same and more versatile tools available in Photoshop, including hundreds of filters, image adjustment tools, spot healing and special effects, and even. - -If you prefer an online version of Photoshop, consider upgrading to Adobe Photoshop Lightroom to get all the same great features and capabilities that you get with Photoshop and that. CorelDRAW Graphics Suite X9 (version 11) is a complete set of vector graphics software. It is a utility for creating and editing vector graphics such as images, layouts, logo design, flash graphics, web graphics, and many more. - -This is a great product for graphic designers, web designers, and multimedia designers. If you want to work with vector graphics and create images, designs, web pages, icons, logos, etc. CorelDRAW Graphics Suite X9 download links are actively updated for the benefit of our visitors. CorelDRAW Graphics Suite X9 is a powerful graphics suite for working with vector graphics. - -You can create images, drawings, layouts, web pages, and animation, without having to learn a new skill or software. How To Download and Install CorelDRAW Graphics Suite X9. Download & Instal CorelDRAW Graphics Suite X9 software. Once the download has started you will shortly be taken to the CorelDRAW Graphics Suite X9 Login Screen. - -If you have installed any previous versions of CorelDRAW Graphics Suite X9 previously, just log in to your existing account by clicking on the login button and then follow the instructions to update your CorelDRAW Graphics Suite X9 to the latest version. Or, if you do not have an existing account, you will be prompted to create one.. - -CorelDRAW Graphics Suite X9 serial number you can find below. You will be given the option to download the CorelDRAW Graphics Suite X9 installer files. CorelDRAW Graphics Suite X9 is the 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Download Igor Pro 7 Full Cracked Software __EXCLUSIVE__.md b/spaces/diacanFperku/AutoGPT/Download Igor Pro 7 Full Cracked Software __EXCLUSIVE__.md deleted file mode 100644 index c42a51e593ed528b8ea23df21152d4adeab9f438..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Igor Pro 7 Full Cracked Software __EXCLUSIVE__.md +++ /dev/null @@ -1,69 +0,0 @@ - -

    Download Igor Pro 7 Full Cracked Software: A Guide for Scientists and Engineers

    -

    If you are looking for powerful and versatile software for graphing, data analysis, and programming, you may want to download Igor Pro 7 full cracked software. Igor Pro 7 is an interactive software environment that allows you to create, import, and modify experiments using scientific and engineering data. You can also produce publication-quality graphs and page layouts with Igor Pro 7.

    -

    Download Igor Pro 7 Full Cracked Software


    Download Zip: https://gohhs.com/2uFTti



    -

    What is Igor Pro 7?

    -

    Igor Pro 7 is the latest version of Igor Pro, a software package developed by WaveMetrics, Inc. Igor Pro 7 has many features that make it an ideal tool for scientists and engineers, such as:

    -
      -
    • Journal-quality scientific graphs
    • -
    • 3D and volume visualization
    • -
    • Flexible image display
    • -
    • Handles large data sets very quickly
    • -
    • Extensive scientific and engineering data analysis
    • -
    • Curve fitting, peak fitting
    • -
    • Signal processing
    • -
    • Image processing and image analysis
    • -
    • Special support for evenly spaced data
    • -
    • Completely programmable and customizable
    • -
    • New in 7: 64-bit and 32-bit architecture on Windows and Macintosh
    • -
    • New in 7: Support for Retina/High-DPI/4K displays on Windows and Macintosh
    • -
    • New in 7: Modernized Data Browser, Debugger, procedure windows, command window, help browser, and dialogs
    • -
    • New in 7: Full Unicode support
    • -
    • New in 7: Unlimited undo/redo in notebooks and for most interactive graph and layout adjustments
    • -
    • New in 7: Over 40 new built-in commands, 14 interactive image analysis dialogs, and 13 new statistics dialogs
    • -
    • New in 7: Partially transparent (non-opaque) colors are supported almost everywhere
    • -
    • New in 7: 3D Graphics (Gizmo) have been modernized and are now built-in. Gizmo windows and dialogs now function like other window types and dialogs, and Gizmo windows have improved labels and support regular Igor annotations (text boxes, color tables, etc.). In addition Gizmo has new image and 3D bar plot object types
    • -
    • New in 7: Page Layouts can contain multiple pages, and new slide show mode can be used to display multiple pages of a layout for presentations
    • -
    -

    How to Download Igor Pro 7 Full Cracked Software?

    -

    If you want to download Igor Pro 7 full cracked software, you will need to find a reliable source that offers the full version of the software with a crack. A crack is a program that modifies the original software to bypass its security features and allows you to use it without paying for a license. However, downloading cracked software can be risky, as it may contain viruses, malware, or spyware that can harm your computer or compromise your data. Therefore, you should always scan any downloaded file with a reputable antivirus program before installing it.

    -

    One possible source to download Igor Pro 7 full cracked software is FreeDownloadManager.org. This site offers a free version of Igor Pro 7 for download. However, it is not the full version of the software, as it has some limitations and restrictions: you can only use it for a limited time, you cannot save or export your projects, and you cannot access some of the advanced features of the software. To unlock the full potential of Igor Pro 7, you will need to download the crack file from another source.

    -

    Another possible source to download Igor Pro 7 full cracked software is MACnWINS.com. This website offers the full version of Igor Pro 7 with a crack for both Mac and Windows operating systems. You can download the software and the crack from their website by following the instructions provided. However, this website may not be trustworthy, as it may contain ads, pop-ups, or redirects that can lead you to malicious or fraudulent sites. Therefore, you should always be careful when downloading anything from this website.

    -

    Conclusion

    -

    Igor Pro 7 is an amazing tool for graphing, data analysis, and programming that can help you with your scientific and engineering projects. However, if you want to download Igor Pro 7 full cracked software, you will need to find a reliable source that offers the full version of the software with a crack. You should also be aware of the risks involved in downloading cracked software, as it may contain viruses, malware, or spyware that can harm your computer or compromise your data. Therefore, you should always scan any downloaded file with a reputable antivirus program before installing it.

    -

    What are the Benefits of Downloading Igor Pro 7 Full Cracked Software?

    -

    Downloading Igor Pro 7 full cracked software can have many benefits for you, especially if you are a scientist or an engineer who needs powerful and versatile software for your projects. Some of the benefits are:

    -

    -
      -
    • You can save money by not paying for a license fee or a subscription fee for the software.
    • -
    • You can access all the features and functions of the software without any limitations or restrictions.
    • -
    • You can use the software offline without needing an internet connection or a registration code.
    • -
    • You can update the software whenever you want without worrying about compatibility issues or losing your data.
    • -
    • You can customize the software according to your preferences and needs by using the built-in programming environment or external code (XOPs) written in C.
    • -
    -

    What are the Risks of Downloading Igor Pro 7 Full Cracked Software?

    -

    However, downloading Igor Pro 7 full cracked software also comes with some risks that you should be aware of before you decide to do it. Some of the risks are:

    -
      -
    • You may violate the intellectual property rights of the software developer, WaveMetrics, Inc., and face legal consequences or penalties.
    • -
    • You may not receive any technical support or customer service from the software developer or other authorized sources.
    • -
    • You may not be able to access some of the online resources or documentation that are available for the licensed users of the software.
    • -
    • You may encounter some bugs, errors, or glitches in the software that may affect its performance or functionality.
    • -
    • You may expose your computer or your data to viruses, malware, or spyware that may be hidden in the cracked software or the crack file.
    • -
    -

    How to Download Igor Pro 7 Full Cracked Software Safely?

    -

    If you still want to download Igor Pro 7 full cracked software, you should take some precautions to ensure that you do it safely and securely. Here are some tips that can help you:

    -
      -
    • Always download the software and the crack file from reputable and trustworthy sources that have positive reviews and ratings from other users.
    • -
    • Always scan any downloaded file with a reputable antivirus program before installing it on your computer.
    • -
    • Always backup your data and create a restore point on your computer before installing any cracked software.
    • -
    • Always read the instructions and follow them carefully when installing and using any cracked software.
    • -
    • Always avoid clicking on any ads, pop-ups, or redirects that may appear on the websites that offer cracked software, as they may lead you to malicious or fraudulent sites.
    • -
    -

    Conclusion

    -

    Igor Pro 7 is an amazing tool for graphing, data analysis, and programming that can help you with your scientific and engineering projects. However, if you want to download Igor Pro 7 full cracked software, you should be aware of the benefits and risks involved in doing so. You should also take some precautions to ensure that you download and use it safely and securely. Alternatively, you can consider buying a license for the software from the official website of WaveMetrics, Inc., which will give you access to all the features and functions of the software without any limitations or risks.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/LastPass Free Password Manager 4.14.0 Crack 2020 Latest Version Download.md b/spaces/diacanFperku/AutoGPT/LastPass Free Password Manager 4.14.0 Crack 2020 Latest Version Download.md deleted file mode 100644 index 1964c77d2e6581ce291cb9263515ddf9a998c3e5..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/LastPass Free Password Manager 4.14.0 Crack 2020 Latest Version Download.md +++ /dev/null @@ -1,7 +0,0 @@ - -

    Bitwarden doesn't make the installation process as painless, but even after you get everything up and running, it takes a while to learn the quirks of Bitwarden's password manager, and you might take your time unpacking the app and getting up to speed with its interface. Also, unlike 1Password, Bitwarden doesn't take advantage of much of the latest Web APIs, so the website doesn't generally know what Word, Excel, or PowerPoint document you're sharing with a link. And Bitwarden's self-hosted version has a lot of room for improvement, as we detailed in our review in March. For all its quirks, Bitwarden's free app provides basic features and is backed by reputable security credentials.

    -

    Still, the free version of Bitwarden lets you try it out before you pony up for a premium account, so if you're looking for a password manager for your own use or to share with others, it's worth at least checking out.

    -

    LastPass Free Password Manager 4.14.0 Crack 2020 Latest Version Download


    Download File: https://gohhs.com/2uFT2s



    -

    We evaluated Bitwarden's overall security before we installed it on three phones, and we tested its syncing and sharing capabilities (and found that they worked well). When we installed Bitwarden's browser extension on our Mac and Android devices and tried to set up a new account, we ran into a bug that prevented us from doing so. So on Android, we tested it by adding a new account in the mobile app. With some reporting and testing of logins on websites such as Gmail, Facebook, and LinkedIn, Bitwarden works as well as 1Password and other free password managers. We've found that Bitwarden's password-management interface (overall and on mobile) takes some getting used to. Many of the icons are tiny, and you'll need some trial and error to figure out what does what. And if you use a MacBook or other device with touch-screen controls, you'll need to press the keys that represent the little icons. Bitwarden switches to a tiny keyboard when you have a login entry on the page. While this does increase the speed of typing, it also makes it harder to find the ones you need.

    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Magical Ride Game Free Download 26.md b/spaces/diacanFperku/AutoGPT/Magical Ride Game Free Download 26.md deleted file mode 100644 index 7ab146713dd06253d2400f74ef553eea145f8bdf..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Magical Ride Game Free Download 26.md +++ /dev/null @@ -1,6 +0,0 @@ -

    magical ride game free download 26


    Download Zip ⚹⚹⚹ https://gohhs.com/2uFUue



    - -Craft your own magic spells! From controlling the arc and pattern of your fireballs to simply enhancing your jumping abilities, craft a plethora of magical effects to ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/PSA DiagBox V7.83 (8.19) 114.md b/spaces/diacanFperku/AutoGPT/PSA DiagBox V7.83 (8.19) 114.md deleted file mode 100644 index 4a10514f5246e659fe824b08d0419863ddd6bd7e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/PSA DiagBox V7.83 (8.19) 114.md +++ /dev/null @@ -1,6 +0,0 @@ -

    PSA DiagBox v7.83 (8.19) 114


    Download File: https://gohhs.com/2uFTIt



    -
    -iCarsoft Diagnóstico for Peugeot Citroen cpii PSA Oel Service Reset, DPF ... Diagbox V7.83 (8.19) PP2000 + 30 pin, 2016.. PSA DiagBox V7.83 (8.19) 114. 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Pirate Facebook Hacker Password - [VERIFIED].md b/spaces/diacanFperku/AutoGPT/Pirate Facebook Hacker Password - [VERIFIED].md deleted file mode 100644 index f9a43b8699defea4ad51f813d5cf051ba3e78466..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pirate Facebook Hacker Password - [VERIFIED].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Pirate Facebook Hacker Password -


    Download Ziphttps://gohhs.com/2uFTyg



    - -Click Here to Download Pirate Facebook Hack · Download Password · Unknown at 23:51. Share .... High speed servers.here you can download pirates facebook ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Purab Aur Paschim Hindi Movie Full Hd 720p [BETTER].md b/spaces/diacanFperku/AutoGPT/Purab Aur Paschim Hindi Movie Full Hd 720p [BETTER].md deleted file mode 100644 index a95037d6d82f3f0cd44075d5fbd8b72d4d79ce77..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Purab Aur Paschim Hindi Movie Full Hd 720p [BETTER].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Purab Aur Paschim Hindi Movie Full Hd 720p


    Download Zip: https://gohhs.com/2uFT1E



    - -
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Azuma-Bert-VITS2/bert_gen.py b/spaces/digitalxingtong/Azuma-Bert-VITS2/bert_gen.py deleted file mode 100644 index 467655b2c4171608ad690fe7dec350db85f84f1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azuma-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - # with open(hps.data.validation_files, encoding='utf-8' ) as f: - # lines.extend(f.readlines()) - - with Pool(processes=2) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. - for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/digitalxingtong/Nailv-Bert-Vits2/monotonic_align/__init__.py b/spaces/digitalxingtong/Nailv-Bert-Vits2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-Bert-Vits2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. 
- neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/walt/train.py b/spaces/dineshreddy/WALT/walt/train.py deleted file mode 100644 index ce383e27d5d067d490bc57f3b861e4fb0fefda50..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/walt/train.py +++ /dev/null @@ -1,188 +0,0 @@ -import argparse -import copy -import os -import os.path as osp -import time -import warnings - -import mmcv -import torch -from mmcv import Config, DictAction -from mmcv.runner import get_dist_info, init_dist -from mmcv.utils import get_git_hash - -from mmdet import __version__ -from mmdet.apis import set_random_seed -from code_local.apis import train_detector -from code_local.datasets import build_dataset -from mmdet.models import build_detector -from mmdet.utils import collect_env, get_root_logger - - -def parse_args(): - parser = argparse.ArgumentParser(description='Train a detector') - parser.add_argument('config', help='train config file path') - parser.add_argument('--work-dir', help='the dir to save logs and models') - parser.add_argument( - '--resume-from', help='the checkpoint file to resume from') - parser.add_argument( - '--no-validate', - action='store_true', - help='whether not to evaluate the checkpoint during training') - group_gpus = parser.add_mutually_exclusive_group() - group_gpus.add_argument( - '--gpus', - type=int, - help='number of gpus to use ' - '(only applicable to non-distributed training)') - group_gpus.add_argument( - '--gpu-ids', - type=int, - nargs='+', - help='ids of gpus to use ' - '(only applicable to non-distributed training)') - parser.add_argument('--seed', type=int, default=None, help='random seed') - parser.add_argument( - '--deterministic', - action='store_true', - help='whether to set deterministic options for CUDNN backend.') - parser.add_argument( - '--options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file (deprecate), ' - 'change to --cfg-options instead.') - parser.add_argument( - '--cfg-options', - nargs='+', - action=DictAction, - help='override some settings in the used config, the key-value pair ' - 'in xxx=yyy format will be merged into config file. If the value to ' - 'be overwritten is a list, it should be like key="[a,b]" or key=a,b ' - 'It also allows nested list/tuple values, e.g. 
key="[(a,b),(c,d)]" ' - 'Note that the quotation marks are necessary and that no white space ' - 'is allowed.') - parser.add_argument( - '--launcher', - choices=['none', 'pytorch', 'slurm', 'mpi'], - default='none', - help='job launcher') - parser.add_argument('--local_rank', type=int, default=0) - args = parser.parse_args() - if 'LOCAL_RANK' not in os.environ: - os.environ['LOCAL_RANK'] = str(args.local_rank) - - if args.options and args.cfg_options: - raise ValueError( - '--options and --cfg-options cannot be both ' - 'specified, --options is deprecated in favor of --cfg-options') - if args.options: - warnings.warn('--options is deprecated in favor of --cfg-options') - args.cfg_options = args.options - - return args - - -def main(): - args = parse_args() - - cfg = Config.fromfile(args.config) - if args.cfg_options is not None: - cfg.merge_from_dict(args.cfg_options) - # import modules from string list. - if cfg.get('custom_imports', None): - from mmcv.utils import import_modules_from_strings - import_modules_from_strings(**cfg['custom_imports']) - # set cudnn_benchmark - if cfg.get('cudnn_benchmark', False): - torch.backends.cudnn.benchmark = True - - # work_dir is determined in this priority: CLI > segment in file > filename - if args.work_dir is not None: - # update configs according to CLI args if args.work_dir is not None - cfg.work_dir = args.work_dir - elif cfg.get('work_dir', None) is None: - # use config filename as default work_dir if cfg.work_dir is None - cfg.work_dir = osp.join('./work_dirs', - osp.splitext(osp.basename(args.config))[0]) - if args.resume_from is not None: - cfg.resume_from = args.resume_from - if args.gpu_ids is not None: - cfg.gpu_ids = args.gpu_ids - else: - cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus) - - # init distributed env first, since logger depends on the dist info. 
- if args.launcher == 'none': - distributed = False - else: - distributed = True - init_dist(args.launcher, **cfg.dist_params) - # re-set gpu_ids with distributed training mode - _, world_size = get_dist_info() - cfg.gpu_ids = range(world_size) - - # create work_dir - mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir)) - # dump config - cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config))) - # init the logger before other steps - timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime()) - log_file = osp.join(cfg.work_dir, f'{timestamp}.log') - logger = get_root_logger(log_file=log_file, log_level=cfg.log_level) - - # init the meta dict to record some important information such as - # environment info and seed, which will be logged - meta = dict() - # log env info - env_info_dict = collect_env() - env_info = '\n'.join([(f'{k}: {v}') for k, v in env_info_dict.items()]) - dash_line = '-' * 60 + '\n' - logger.info('Environment info:\n' + dash_line + env_info + '\n' + - dash_line) - meta['env_info'] = env_info - meta['config'] = cfg.pretty_text - # log some basic info - logger.info(f'Distributed training: {distributed}') - logger.info(f'Config:\n{cfg.pretty_text}') - - # set random seeds - if args.seed is not None: - logger.info(f'Set random seed to {args.seed}, ' - f'deterministic: {args.deterministic}') - set_random_seed(args.seed, deterministic=args.deterministic) - cfg.seed = args.seed - meta['seed'] = args.seed - meta['exp_name'] = osp.basename(args.config) - - model = build_detector( - cfg.model, - train_cfg=cfg.get('train_cfg'), - test_cfg=cfg.get('test_cfg')) - - datasets = [build_dataset(cfg.data.train)] - if len(cfg.workflow) == 2: - val_dataset = copy.deepcopy(cfg.data.val) - val_dataset.pipeline = cfg.data.train.pipeline - datasets.append(build_dataset(val_dataset)) - if cfg.checkpoint_config is not None: - # save mmdet version, config file content and class names in - # checkpoints as meta data - cfg.checkpoint_config.meta = dict( - mmdet_version=__version__ + get_git_hash()[:7], - CLASSES=datasets[0].CLASSES) - # add an attribute for visualization convenience - model.CLASSES = datasets[0].CLASSES - train_detector( - model, - datasets, - cfg, - distributed=distributed, - validate=(not args.no_validate), - timestamp=timestamp, - meta=meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/dmeck/RVC-Speakers/vits/modules/transforms/transforms_v2.py b/spaces/dmeck/RVC-Speakers/vits/modules/transforms/transforms_v2.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/vits/modules/transforms/transforms_v2.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - 
unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = 
searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/dolphinchat/global/dolphin.script.py b/spaces/dolphinchat/global/dolphin.script.py deleted file mode 100644 index 2d32db34a9415f1267bde0dc8c1e64502225757b..0000000000000000000000000000000000000000 --- a/spaces/dolphinchat/global/dolphin.script.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr - -with gr.Blocks() as demo: - gr.HTML(value="""

    404

    """) - gr.HTML(value="

    ") - gr.HTML(value="""

    The page has been moved to a new address!Go to the page!

    """) - -title = "404" - -demo.launch() \ No newline at end of file diff --git a/spaces/dorkai/ChatUIPro/app/global.d.ts b/spaces/dorkai/ChatUIPro/app/global.d.ts deleted file mode 100644 index a7606f133dc61ceaac8c0022c7a9b9cb61670bb3..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/app/global.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -declare module 'dify-client'; -declare module 'uuid'; diff --git a/spaces/ds520/bingo/src/components/ui/icons.tsx b/spaces/ds520/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - 
...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/edvanger/White-box-Cartoonization/wbc/cartoonize.py b/spaces/edvanger/White-box-Cartoonization/wbc/cartoonize.py deleted file mode 100644 index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000 --- a/spaces/edvanger/White-box-Cartoonization/wbc/cartoonize.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import cv2 -import numpy as np -import tensorflow as tf -import wbc.network as network -import wbc.guided_filter as guided_filter -from tqdm import tqdm - - -def resize_crop(image): - h, w, c = np.shape(image) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, int(720 * w / h) - image = cv2.resize(image, (w, h), - interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - image = image[:h, :w, :] - return image - - -def cartoonize(load_folder, save_folder, model_path): - print(model_path) - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(input_photo) - final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - - sess.run(tf.global_variables_initializer()) - saver.restore(sess, tf.train.latest_checkpoint(model_path)) - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = sess.run(final_out, feed_dict={input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -class Cartoonize: - def __init__(self, model_path): - print(model_path) - self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(self.input_photo) - self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - self.sess = tf.Session(config=config) - - self.sess.run(tf.global_variables_initializer()) - saver.restore(self.sess, tf.train.latest_checkpoint(model_path)) - - def run(self, load_folder, save_folder): - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = 
cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - def run_sigle(self, load_path, save_path): - try: - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -if __name__ == '__main__': - model_path = 'saved_models' - load_folder = 'test_images' - save_folder = 'cartoonized_images' - if not os.path.exists(save_folder): - os.mkdir(save_folder) - cartoonize(load_folder, save_folder, model_path) diff --git a/spaces/ehristoforu/Hwhswj/README.md b/spaces/ehristoforu/Hwhswj/README.md deleted file mode 100644 index 81a9fcbaa21647e932df63259262abd0a5996821..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Hwhswj/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hwhswj -emoji: 📉 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/emc348/faces-through-time/utils/align_data.py b/spaces/emc348/faces-through-time/utils/align_data.py deleted file mode 100644 index 0e149220fabceaa934bc9a97e7be6fcd52cc7bef..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/utils/align_data.py +++ /dev/null @@ -1,35 +0,0 @@ -from configs import paths_config -import dlib -import glob -import os -from tqdm import tqdm -from utils.alignment import align_face - - -def pre_process_images(raw_images_path): - current_directory = os.getcwd() - - IMAGE_SIZE = 1024 - predictor = dlib.shape_predictor(paths_config.dlib) - os.chdir(raw_images_path) - images_names = glob.glob(f'*') - - aligned_images = [] - for image_name in tqdm(images_names): - try: - aligned_image = align_face(filepath=f'{raw_images_path}/{image_name}', - predictor=predictor, output_size=IMAGE_SIZE) - aligned_images.append(aligned_image) - except Exception as e: - print(e) - - os.makedirs(paths_config.input_data_path, exist_ok=True) - for image, name in zip(aligned_images, images_names): - real_name = name.split('.')[0] - image.save(f'{paths_config.input_data_path}/{real_name}.jpeg') - - os.chdir(current_directory) - - -if __name__ == "__main__": - pre_process_images('') diff --git a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py b/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py deleted file mode 100644 index d0b105d148d6d8fddc461d1c04f659200957c189..0000000000000000000000000000000000000000 --- a/spaces/eswat/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np - - -def save_samples_truncted_prob(fname, points, prob): - ''' - Save the visualization of sampling to a ply file. - Red points represent positive predictions. - Green points represent negative predictions. 
- :param fname: File name to save - :param points: [N, 3] array of points - :param prob: [N, 1] array of predictions in the range [0~1] - :return: - ''' - r = (prob > 0.5).reshape([-1, 1]) * 255 - g = (prob < 0.5).reshape([-1, 1]) * 255 - b = np.zeros(r.shape) - - to_save = np.concatenate([points, r, g, b], axis=-1) - return np.savetxt(fname, - to_save, - fmt='%.6f %.6f %.6f %d %d %d', - comments='', - header=( - 'ply\nformat ascii 1.0\nelement vertex {:d}\nproperty float x\nproperty float y\nproperty float z\nproperty uchar red\nproperty uchar green\nproperty uchar blue\nend_header').format( - points.shape[0]) - ) - - -def save_samples_rgb(fname, points, rgb): - ''' - Save the visualization of sampling to a ply file. - Red points represent positive predictions. - Green points represent negative predictions. - :param fname: File name to save - :param points: [N, 3] array of points - :param rgb: [N, 3] array of rgb values in the range [0~1] - :return: - ''' - to_save = np.concatenate([points, rgb * 255], axis=-1) - return np.savetxt(fname, - to_save, - fmt='%.6f %.6f %.6f %d %d %d', - comments='', - header=( - 'ply\nformat ascii 1.0\nelement vertex {:d}\nproperty float x\nproperty float y\nproperty float z\nproperty uchar red\nproperty uchar green\nproperty uchar blue\nend_header').format( - points.shape[0]) - ) diff --git a/spaces/evaluate-measurement/regard/app.py b/spaces/evaluate-measurement/regard/app.py deleted file mode 100644 index 8d6b262b7c04d29b6480d0183435c4b7599d97ba..0000000000000000000000000000000000000000 --- a/spaces/evaluate-measurement/regard/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("regard") -launch_gradio_widget(module) diff --git a/spaces/facebook/ov-seg/open_vocab_seg/mask_former_model.py b/spaces/facebook/ov-seg/open_vocab_seg/mask_former_model.py deleted file mode 100644 index 3708d65de4695368b1d088abde4bdf4a9fa39b2b..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/mask_former_model.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -from typing import Tuple - -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.data import MetadataCatalog -from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head -from detectron2.modeling.backbone import Backbone -from detectron2.modeling.postprocessing import sem_seg_postprocess -from detectron2.structures import ImageList - -from .modeling.criterion import SetCriterion -from .modeling.matcher import HungarianMatcher - - -@META_ARCH_REGISTRY.register() -class MaskFormer(nn.Module): - """ - Main class for mask classification semantic segmentation architectures. 
- """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - sem_seg_head: nn.Module, - criterion: nn.Module, - num_queries: int, - panoptic_on: bool, - object_mask_threshold: float, - overlap_threshold: float, - metadata, - size_divisibility: int, - sem_seg_postprocess_before_inference: bool, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - sem_seg_head: a module that predicts semantic segmentation from backbone features - criterion: a module that defines the loss - num_queries: int, number of queries - panoptic_on: bool, whether to output panoptic segmentation prediction - object_mask_threshold: float, threshold to filter query based on classification score - for panoptic segmentation inference - overlap_threshold: overlap threshold used in general inference for panoptic segmentation - metadata: dataset meta, get `thing` and `stuff` category names for panoptic - segmentation inference - size_divisibility: Some backbones require the input height and width to be divisible by a - specific integer. We can use this to override such requirement. - sem_seg_postprocess_before_inference: whether to resize the prediction back - to original input size before semantic segmentation inference or after. - For high-resolution dataset like Mapillary, resizing predictions before - inference will cause OOM error. - pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - """ - super().__init__() - self.backbone = backbone - self.sem_seg_head = sem_seg_head - self.criterion = criterion - self.num_queries = num_queries - self.overlap_threshold = overlap_threshold - self.panoptic_on = panoptic_on - self.object_mask_threshold = object_mask_threshold - self.metadata = metadata - if size_divisibility < 0: - # use backbone size_divisibility if not set - size_divisibility = self.backbone.size_divisibility - self.size_divisibility = size_divisibility - self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference - self.register_buffer( - "pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False - ) - self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape()) - - # Loss parameters: - deep_supervision = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION - no_object_weight = cfg.MODEL.MASK_FORMER.NO_OBJECT_WEIGHT - dice_weight = cfg.MODEL.MASK_FORMER.DICE_WEIGHT - mask_weight = cfg.MODEL.MASK_FORMER.MASK_WEIGHT - - # building criterion - matcher = HungarianMatcher( - cost_class=1, - cost_mask=mask_weight, - cost_dice=dice_weight, - ) - - weight_dict = {"loss_ce": 1, "loss_mask": mask_weight, "loss_dice": dice_weight} - if deep_supervision: - dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS - aux_weight_dict = {} - for i in range(dec_layers - 1): - aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()}) - weight_dict.update(aux_weight_dict) - - losses = ["labels", "masks"] - - criterion = SetCriterion( - sem_seg_head.num_classes, - matcher=matcher, - weight_dict=weight_dict, - eos_coef=no_object_weight, - losses=losses, - ) - - return { - "backbone": backbone, - "sem_seg_head": sem_seg_head, - "criterion": criterion, - "num_queries": cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES, - "panoptic_on": 
cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON, - "object_mask_threshold": cfg.MODEL.MASK_FORMER.TEST.OBJECT_MASK_THRESHOLD, - "overlap_threshold": cfg.MODEL.MASK_FORMER.TEST.OVERLAP_THRESHOLD, - "metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), - "size_divisibility": cfg.MODEL.MASK_FORMER.SIZE_DIVISIBILITY, - "sem_seg_postprocess_before_inference": ( - cfg.MODEL.MASK_FORMER.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE - or cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON - ), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - } - - @property - def device(self): - return self.pixel_mean.device - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. - For now, each item in the list is a dict that contains: - * "image": Tensor, image in (C, H, W) format. - * "instances": per-region ground truth - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model (may be different - from input resolution), used in inference. - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - - * "sem_seg": - A Tensor that represents the - per-pixel segmentation prediced by the head. - The prediction has shape KxHxW that represents the logits of - each class for each pixel. - * "panoptic_seg": - A tuple that represent panoptic output - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". - """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.size_divisibility) - - features = self.backbone(images.tensor) - outputs = self.sem_seg_head(features) - - if self.training: - # mask classification target - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - targets = self.prepare_targets(gt_instances, images) - else: - targets = None - - # bipartite matching-based loss - losses = self.criterion(outputs, targets) - - for k in list(losses.keys()): - if k in self.criterion.weight_dict: - losses[k] *= self.criterion.weight_dict[k] - else: - # remove this loss if not specified in `weight_dict` - losses.pop(k) - - return losses - else: - mask_cls_results = outputs["pred_logits"] - mask_pred_results = outputs["pred_masks"] - # upsample masks - mask_pred_results = F.interpolate( - mask_pred_results, - size=(images.tensor.shape[-2], images.tensor.shape[-1]), - mode="bilinear", - align_corners=False, - ) - - processed_results = [] - for mask_cls_result, mask_pred_result, input_per_image, image_size in zip( - mask_cls_results, mask_pred_results, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - - if self.sem_seg_postprocess_before_inference: - mask_pred_result = sem_seg_postprocess( - mask_pred_result, image_size, height, width - ) - - # semantic segmentation inference - r = self.semantic_inference(mask_cls_result, mask_pred_result) - if not self.sem_seg_postprocess_before_inference: - r = sem_seg_postprocess(r, image_size, height, width) - processed_results.append({"sem_seg": r}) - - # panoptic segmentation inference - if 
self.panoptic_on: - panoptic_r = self.panoptic_inference( - mask_cls_result, mask_pred_result - ) - processed_results[-1]["panoptic_seg"] = panoptic_r - - return processed_results - - def prepare_targets(self, targets, images): - h, w = images.tensor.shape[-2:] - new_targets = [] - for targets_per_image in targets: - # pad gt - gt_masks = targets_per_image.gt_masks - padded_masks = torch.zeros( - (gt_masks.shape[0], h, w), dtype=gt_masks.dtype, device=gt_masks.device - ) - padded_masks[:, : gt_masks.shape[1], : gt_masks.shape[2]] = gt_masks - new_targets.append( - { - "labels": targets_per_image.gt_classes, - "masks": padded_masks, - } - ) - return new_targets - - def semantic_inference(self, mask_cls, mask_pred): - mask_cls = F.softmax(mask_cls, dim=-1)[..., :-1] - mask_pred = mask_pred.sigmoid() - semseg = torch.einsum("qc,qhw->chw", mask_cls, mask_pred) - return semseg diff --git a/spaces/failfast/2D-GameCreator/.github/CONTRIBUTING.md b/spaces/failfast/2D-GameCreator/.github/CONTRIBUTING.md deleted file mode 100644 index 605676e1f78bedb3dbbefd64f9c589ca2efc0ce1..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/.github/CONTRIBUTING.md +++ /dev/null @@ -1,18 +0,0 @@ -# Contributing - -When contributing to this repository, please first discuss the change you wish to make via issue, -email, or any other method with the owners of this repository before making a change. - -Please note we have a code of conduct, please follow it in all your interactions with the project. - -## Pull Request Process - -Ensure any install or build dependencies are removed before the end of the layer when doing a build. -Fork the repository and create a new branch (feature/my-feature) Commit changes following the -"conventional-changelog" rules. Do not modify any versions manually. Don't build new versions. Use -the PULL_REQUEST_TEMPLATE - -## Reporting issues - -Ensure any install or build dependencies are removed before the end of the layer when doing a build. -Create a new issue (bug/some-bug) Always list "yarn version", "node version" Use the ISSUE_TEMPLATE diff --git a/spaces/falterWliame/Face_Mask_Detection/Jogo Caca Niquel Halloween Ex 30 Linhas Gratis ((HOT)).md b/spaces/falterWliame/Face_Mask_Detection/Jogo Caca Niquel Halloween Ex 30 Linhas Gratis ((HOT)).md deleted file mode 100644 index c753a317d93df6ff3a498d3be5ade229cfe3f5db..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Jogo Caca Niquel Halloween Ex 30 Linhas Gratis ((HOT)).md +++ /dev/null @@ -1,26 +0,0 @@ -

    jogo caca niquel halloween ex 30 linhas gratis


    Download Filehttps://urlca.com/2uDdGx



    -
    -tranformado em 30 jogo para android tronxe transformado em android. Como andrxe, jogo para pc. The spongy crayons below the plug are protected by two spongy layers of vinyl. Free download game show online - Game Show Games! - Free Game Download - PC Games - Test Drive Unlimited 2 -...Q: - -Could a BB harm the human body? - -I was reading about a restaurant called No Bone Zone in Tokyo. It's a part of their promotion that they serve healthy but cheap and delicious "BB" meals. I was wondering what those "BB" meals were about. I saw in one of the articles that they serve "BB" meals with chicken. Chicken is one of the best sources of protein for human body. So, is it harmful to the human body that they are serving these meals? What are the sources of bacteria that they are using? - -A: - -Yes, a bb is safe for human consumption. - -From Wikipedia: - -BB is an acronym for boneless, baked, or broiled. Baked means it's a boneless, meatless, or vegetarian food. Broiled means it's a broiled meat dish. - -It's because they removed the bones that it is considered to be healthy and safe for human consumption. - -It's also worth noting that no bones were cooked in the preparation of this food, so as long as you wash it and eat it, it should be safe for you. - - FILED - - NOT FOR PUBLICATION JAN 08 2011 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Karam Full Movie Download In Hd 720p.md b/spaces/falterWliame/Face_Mask_Detection/Karam Full Movie Download In Hd 720p.md deleted file mode 100644 index afc9e23a1fc1047f8aba5aabd6813666154e7be9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Karam Full Movie Download In Hd 720p.md +++ /dev/null @@ -1,44 +0,0 @@ -

    Karam Full Movie Download In Hd 720p


    Download File ✫✫✫ https://urlca.com/2uDdvU



    - -Watch Online Online Full Free Download John Abraham Priyanka Chopra Bollywood Movies Online Full Free Download Priyanka Chopra Bollywood movies with best quality. Watch Online Online Full Free Download John Abraham Priyanka Chopra Bollywood Movies Online Full Free Download, Watch Online Online Full Free Download John Abraham Priyanka Chopra Bollywood Movies Online Full Free Download, Watch Online Online Full Free Download Priyanka Chopra Bollywood movies with best quality. Like John Abraham Priyanka Chopra Bollywood Movies Online Full Free Download, Watch Online Online Full Free Download John Abraham Priyanka Chopra Bollywood Movies Online Full Free Download.This article is part of a series that looks at how to hack into your daily habits. - -Stay with me. - -In this article, I’m going to explain to you how you can hack your habits, so that they work for you and not against you. - -You’ve probably heard many times, “To be successful, you need to work on your habits.” - -Do you know how you can really change your habits? - -Hacking Your Habits - -There are different strategies to change your habits. However, I think a good place to start is to look at what you currently think are the habits that are holding you back from achieving your goals. - -In this article, I’m going to guide you through a step-by-step process to hack your habits, and be successful at them. - -How to Hack Your Habits - -Let’s take the example of saying ‘good morning’ to a colleague. - -You are often doing this, even when you’re running late for a meeting. - -What if you could learn a routine to do this in advance? - -If you did it in advance, you would have taken the hassle and stress out of getting to the meeting. - -Why not try out this technique? - -In the beginning, when you start a routine, I would suggest starting by doing it once a week. - -Just try it out. - -This will give you a good idea of how it feels. - -If it feels OK, then I would do it 2 or 3 times a week. - -Try to work out your schedule so that you can complete the habit each week. - -So, if you’re trying to start this new habit, then try it out for about a month, and 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Nur Virgin Siyasetin Sosyolojisi Indir.pdf.md b/spaces/falterWliame/Face_Mask_Detection/Nur Virgin Siyasetin Sosyolojisi Indir.pdf.md deleted file mode 100644 index b4c1acc2784a96a12620a20c93eee146da212094..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Nur Virgin Siyasetin Sosyolojisi Indir.pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Nur Virgin Siyasetin Sosyolojisi Indir.pdf


    DOWNLOAD ::: https://urlca.com/2uDd5H



    - -PDF download Citation download. About Share. Groups, Ideologies and Discourses: Glimpses of the Turkic Speaking World ISTANBULER TEXTE UND ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Enjoy Melon Playground with the April Fools Update - APK Download Link.md b/spaces/fatiXbelha/sd/Enjoy Melon Playground with the April Fools Update - APK Download Link.md deleted file mode 100644 index 2473c2a406035aeff25b8b9eaba0f1693c4b81e7..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Melon Playground with the April Fools Update - APK Download Link.md +++ /dev/null @@ -1,124 +0,0 @@ -
    -

    Melon Playground April Fools Update APK: A Fun and Creative Sandbox Game

    -

    Do you love sandbox games where you can create your own scenarios and have fun with different items and characters? If so, you might want to check out Melon Playground, a simple but addictive game that lets you unleash your imagination and creativity. And if you want to spice things up, you can also download the April Fools Update APK, which adds some hilarious pranks and surprises to the game. In this article, we will tell you everything you need to know about Melon Playground and its April Fools Update APK.

    -

    melon playground april fools update apk


    DOWNLOAD https://urllie.com/2uNCFm



    -

    What is Melon Playground?

    -

Melon Playground is a sandbox game developed by playducky.com, a studio that specializes in casual and simulation games. It is available for Android devices and has over 10 million downloads on Google Play. The game is inspired by other popular sandbox games like Garry's Mod and Minecraft, but offers simpler and more accessible gameplay.

    -

    A simple sandbox game with endless possibilities

    -

    The main idea of Melon Playground is that you can create your own scenarios using a variety of items and characters. You can choose from different environments, such as a city, a farm, a desert, or a space station. You can also spawn different objects, such as melee weapons, guns, barrels, vehicles, animals, and even melons. You can then interact with these objects using touch controls or physics-based mechanics. You can also customize the appearance and behavior of the characters, such as their clothes, hair, skin color, voice, and emotions.

    -

    How to play Melon Playground

    -

The game is very easy to play and does not require any tutorials or instructions. Just tap the screen to open the menu, where you can select the items and characters you want to spawn. You can drag and drop them to move or rotate them, and tap them to activate their functions or change their properties. For example, tap a gun to shoot it, or tap a character to make them talk or change their expression. You can also use pinch gestures to zoom in or out of the scene.

    -

    melon playground mod editor apk
    -melon playground doors mod apk
    -melon playground update 13.0 apk
    -melon playground discord server link
    -melon playground ios download free
    -melon playground ricochet mode apk
    -melon playground sandbox game android
    -melon playground indie game developer sliz
    -melon playground community manager twitter
    -melon playground net energy gain experiment
    -melon playground 11.0 update log apk
    -melon playground modding tutorial apk
    -melon playground new additions coming soon
    -melon playground 10k followers celebration
    -melon playground nuclear fusion reaction apk
    -melon playground official twitter account
    -melon playground doors hotel mod apk
    -melon playground 12.0 update release date
    -melon playground ios mod editor possible
    -melon playground smashing fruits fun apk
    -melon playground 11.3 bugfix update apk
    -melon playground google play store link
    -melon playground holy grail fusion experiment
    -melon playground 100 million degrees apk
    -melon playground mini sun creation apk
    -melon playground kstar facility collaboration
    -melon playground korea institute of fusion energy
    -melon playground physics problem to engineering one
    -melon playground stable experiment for 30 seconds
    -melon playground seven times hotter than sun core
    -melon playground temperature in kelvin apk
    -melon playground sun fact sheet reference apk
    -melon playground solar core wikipedia article apk
    -melon playground solar atmosphere information apk
    -melon playground radiative zone temperature apk
    -melon playground convection zone temperature apk
    -melon playground surface gas pressure apk
    -melon playground photosphere composition apk
    -melon playground chromosphere thickness apk
    -melon playground sun spot cycle apk
    -melon playground hydrogen plasma density apk
    -melon playground fusion of hydrogen nuclei apk
    -melon playground helium production rate apk
    -melon playground solar neutrino problem apk

    -

    What are the features of Melon Playground

    -

    Melon Playground has many features that make it a fun and creative sandbox game. Some of these features are:

    -
      -
    • A large collection of items and characters that you can use to create your own scenarios.
    • -
    • A realistic physics engine that allows you to interact with the objects in various ways.
    • -
    • A user-friendly interface that lets you easily spawn and manipulate the items and characters.
    • -
    • A screenshot mode that lets you capture your creations and share them with others.
    • -
    • A multiplayer mode that lets you play with other players online or locally.
    • -
    • A modding system that lets you download and install custom mods made by other users or create your own mods using the in-game editor.
    • -
    -

    What is the April Fools Update?

    -

The April Fools Update is a special update that was released on April 1st, 2023. It is a prank-filled update that adds new items and modes designed to trick or surprise players. The update is not available on Google Play, but you can download it as an APK file from third-party websites. It is compatible with the latest version of Melon Playground (15.1.104) and does not require any root access or permissions.

    -

    A prank-filled update with new items and modes

    -

    The April Fools Update adds some new items and modes to the game that are meant to be funny or shocking. Some of these items and modes are:

    -
      -
    • A fake update screen that appears when you launch the game, making you think that the game is updating or downloading something.
    • -
    • A fake virus alert that pops up randomly, making you think that your device is infected or hacked.
    • -
    • A fake crash screen that appears when you try to exit the game, making you think that the game has crashed or frozen.
    • -
    • A fake melon bomb that explodes when you spawn it, causing a loud noise and a screen shake.
    • -
    • A fake melon launcher that shoots melons instead of bullets, causing a mess and a lot of laughter.
    • -
    • A fake melon mode that turns everything in the game into melons, including the items, the characters, and the environment.
    • -
    -

    How to download and install the April Fools Update APK

    -

    If you want to try the April Fools Update, you will need to download and install the APK file from a third-party website. Here are the steps to do so:

    -
      -
    1. Go to a reliable website that offers the APK file, such as [APKPure] or [APKMirror].
    2. -
    3. Search for Melon Playground April Fools Update APK and download the file to your device.
    4. -
    5. Before installing the APK file, make sure that you have enabled the option to install apps from unknown sources in your device settings.
    6. -
    7. Locate the APK file in your device storage and tap on it to install it.
    8. -
    9. Launch the game and enjoy the April Fools Update.
    10. -
    -

    What are the benefits of using the April Fools Update APK

    -

    The April Fools Update APK is not an official update from the developers, but it is a fun and harmless way to enjoy some pranks and surprises in Melon Playground. Some of the benefits of using the APK are:

    -
      -
    • You can experience some new items and modes that are not available in the original game.
    • -
    • You can prank your friends or family by making them play the game and watch their reactions.
    • -
    • You can have a good laugh and relieve some stress by playing with the fake items and modes.
    • -
    • You can uninstall the APK anytime if you want to go back to the original game.
    • -
    -

    Conclusion

    -

Melon Playground is a fun and creative sandbox game that lets you create your own scenarios and have fun with different items and characters. The April Fools Update APK is a prank-filled update that adds new items and modes designed to trick or surprise players. If you want to try something different and hilarious in Melon Playground, you can download and install the April Fools Update APK from a third-party website. However, be careful not to fall for any of the pranks yourself!

    -

    Why you should try Melon Playground April Fools Update APK

    -

Melon Playground April Fools Update APK is a great way to spice up your gameplay and have some laughs. It is easy to download and install, and it does not affect your original game. You can enjoy new items and modes that are not available in the original game, such as fake melons, fake virus alerts, fake update screens, and more. You can prank your friends or family by making them play the game and watching their reactions, or simply relieve some stress by playing with the fake items and modes yourself. You can uninstall the APK anytime if you want to go back to the original game. So, what are you waiting for? Download Melon Playground April Fools Update APK today and have some fun!

    -

    FAQs

    -

    Here are some frequently asked questions about Melon Playground April Fools Update APK:

    -

    Is Melon Playground April Fools Update APK safe to use?

    -

    Yes, Melon Playground April Fools Update APK is safe to use as long as you download it from a reliable website. It does not contain any malware or viruses, and it does not require any root access or permissions. However, you should always be careful when downloading any APK files from unknown sources, as they may contain harmful or malicious content.

    -

    Does Melon Playground April Fools Update APK work on all devices?

    -

    Melon Playground April Fools Update APK works on most Android devices that support Melon Playground. However, some devices may not be compatible with the update or may experience some glitches or errors. If you encounter any problems while using the update, you can try reinstalling it or uninstalling it completely.

    -

    Can I play online with other players using Melon Playground April Fools Update APK?

    Yes, you can play online with other players using Melon Playground April Fools Update APK. However, you may not be able to join some servers or games that are using the original version of Melon Playground. You may also encounter some compatibility issues or bugs while playing online. If you want to play online without any problems, you may want to use the original version of Melon Playground instead.

    -

    How can I uninstall Melon Playground April Fools Update APK?

    -

    If you want to uninstall Melon Playground April Fools Update APK, you can follow these steps:

    -
      -
    1. Go to your device settings and tap on Apps or Applications.
    2. -
    3. Find and tap on Melon Playground.
    4. -
    5. Tap on Uninstall and confirm your action.
    6. -
    7. Wait for the uninstallation process to finish.
    8. -
    9. You can also delete the APK file from your device storage if you want to free up some space.
    10. -
    -

    Where can I find more information about Melon Playground and its updates?

    -

    If you want to find more information about Melon Playground and its updates, you can visit the following sources:

    -
      -
    • The official website of playducky.com, the developer of Melon Playground.
    • -
    • The official Facebook page of playducky.com, where they post news and updates about their games.
    • -
    • The official YouTube channel of playducky.com, where they upload videos and trailers of their games.
    • -
    • The official Discord server of playducky.com, where you can chat with other players and developers.
    • -
    • The official Reddit community of Melon Playground, where you can share your creations and feedback.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Fidget Spinner APK Download and Enjoy the Most Relaxing Simulator.md b/spaces/fatiXbelha/sd/Fidget Spinner APK Download and Enjoy the Most Relaxing Simulator.md deleted file mode 100644 index 5ae15428219de63653920c67b38361185fc944c1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Fidget Spinner APK Download and Enjoy the Most Relaxing Simulator.md +++ /dev/null @@ -1,131 +0,0 @@ - -

    Fidget Spinner Apkpure: The Ultimate Guide for Spinner Lovers

    -

    If you are looking for a fun and relaxing way to relieve stress, improve focus, and enjoy yourself, you might want to try out a fidget spinner. A fidget spinner is a toy that consists of a ball bearing in the center of a multi-lobed flat structure made from metal or plastic that spins along its axis with pressure. Fidget spinners have become very popular in recent years, especially among children and adults who have trouble focusing or who need to fidget to relieve nervous energy, anxiety, or psychological stress.

    -

    fidget spinner apkpure


    Download Zip > https://urllie.com/2uNIeK



    -

    But what if you don't have a physical fidget spinner or you want to try out different types and styles of spinners without spending money? Well, there is an app for that. It's called fidget spinner apkpure, and it's one of the best apps for spinner lovers. Fidget spinner apkpure is an app that allows you to download, install, customize, and play with hundreds of different kinds of fidget spinners on your Android device. You can also do some cool tricks with them and share your creations with others.

    -

In this article, we will show you how to use fidget spinner apkpure to have fun and relax with your favorite spinners. We will also tell you about some of the features and advantages of this app, and answer some frequently asked questions about it. So, let's get started!

    -

    How to Download and Install Fidget Spinner Apkpure on Your Device?

    -

    The first thing you need to do is to download and install fidget spinner apkpure on your Android device. Here are the steps:

    -
      -
    1. Go to https://apkpure.com/fidget-spinner/com.ketchapp.fingerspinner on your browser.
    2. -
    3. Click on the green "Download APK" button.
    4. -
    5. Wait for the download to finish and then open the file.
    6. -
    7. Allow the installation of unknown sources if prompted.
    8. -
    9. Follow the instructions on the screen to install the app.
    10. -
    11. Launch the app and enjoy!
    12. -
    -

    How to Use Fidget Spinner Apkpure to Customize and Play with Different Types of Spinners?

    -

    Once you have installed fidget spinner apkpure on your device, you can start customizing and playing with different types of spinners. Here are some of the things you can do:

    -
      -
    • To customize your spinner, tap on the gear icon on the top right corner of the screen. You can change the color, shape, size, speed, and sound of your spinner. You can also unlock new spinners by watching ads or using coins.
    • -
    • To play with your spinner, swipe your finger on the screen to spin it. You can also tilt your device to change the direction of the spin.
    • -
    • To see how long you can spin your spinner, tap on the timer icon on the top left corner of the screen. You can also see your best score and rank on the leaderboard.
    • -
    • To see how many spins you can do in a row without stopping, tap on the challenge icon on the bottom right corner of the screen. You can also see your best streak and rank on the leaderboard.
    • -
    -

    How to Do Some Cool Tricks with Fidget Spinner Apkpure?

    -

Another fun thing you can do with fidget spinner apkpure is perform some cool tricks with your spinners. Here are some of the tricks you can try:

    -

    fidget spinner apk download apkpure
    -fidget spinner simulator apkpure
    -fidget spinner game apkpure
    -fidget spinner mod apk apkpure
    -fidget spinner hack apk apkpure
    -fidget spinner pro apk apkpure
    -fidget spinner 3d apk apkpure
    -fidget spinner online apkpure
    -fidget spinner free download apkpure
    -fidget spinner app apkpure
    -fidget spinner android apkpure
    -fidget spinner offline apkpure
    -fidget spinner ketchapp apkpure
    -fidget spinner premium apk apkpure
    -fidget spinner unlimited money apkpure
    -fidget spinner best apk apkpure
    -fidget spinner latest version apkpure
    -fidget spinner real apk apkpure
    -fidget spinner hd apk apkpure
    -fidget spinner fun apk apkpure
    -fidget spinner challenge apk apkpure
    -fidget spinner tricks apk apkpure
    -fidget spinner tips apk apkpure
    -fidget spinner guide apk apkpure
    -fidget spinner cheats apk apkpure
    -fidget spinner neon apk apkpure
    -fidget spinner glow apk apkpure
    -fidget spinner rainbow apk apkpure
    -fidget spinner metal apk apkpure
    -fidget spinner gold apk apkpure
    -fidget spinner diamond apk apkpure
    -fidget spinner custom apk apkpure
    -fidget spinner creator apk apkpure
    -fidget spinner maker apk apkpure
    -fidget spinner editor apk apkpure
    -fidget spinner design apk apkpure
    -fidget spinner collection apk apkpure
    -fidget spinner shop apk apkpure
    -fidget spinner store apk apkpure
    -fidget spinner market apk apkpure
    -fidget spinner world apk apkpure
    -fidget spinner master apk apkpure
    -fidget spinner legend apk apkpure
    -fidget spinner hero apk apkpure
    -fidget spinner ninja apk apkpure
    -fidget spinner battle apk apkpure
    -fidget spinner duel apk apkpure
    -fidget spinner arena apk apkpure
    -fidget spinner tournament apk apkpure
    -fidget spinner leaderboard apk apkpure

    -
      -
    • To do a flip, swipe your finger on the screen and then quickly tap it again.
    • -
    • To do a bounce, swipe your finger on the screen and then tilt your device to make the spinner bounce off the edges.
    • -
    • To do a spin transfer, swipe your finger on the screen and then tap on another spinner to transfer the spin to it.
    • -
    • To do a combo, swipe your finger on the screen and then tap on multiple spinners to spin them all at once.
    • -
    -

    You can also watch some videos of other users doing amazing tricks with their spinners on the app. Just tap on the video icon on the bottom left corner of the screen and enjoy!

    -

    Conclusion

    -

    Fidget spinner apkpure is a great app for spinner lovers who want to have fun and relax with their favorite spinners. You can download, install, customize, and play with hundreds of different kinds of spinners on your Android device. You can also do some cool tricks with them and share your creations with others. Fidget spinner apkpure is easy to use, safe and secure, and free to download.

    -

    So, what are you waiting for? Download fidget spinner apkpure today and enjoy spinning your way to happiness! You can find the app here: https://apkpure.com/fidget-spinner/com.ketchapp.fingerspinner.

    -

    Remember, fidget spinner apkpure is not just an app, it's a lifestyle. Spin it, love it, live it!

    -

    FAQs

    -

    What are some of the best fidget spinner types available on apkpure?

    -

    There are many types of fidget spinners available on apkpure, such as metal, plastic, wood, glow in the dark, rainbow, emoji, animal, superhero, and more. You can also create your own spinners by mixing and matching different parts and colors. Some of the best fidget spinner types are:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
| Type | Description |
| --- | --- |
| Metal | Metal spinners are durable, heavy, and shiny. They have a smooth and fast spin that lasts for a long time. They also make a satisfying sound when spinning. |
| Glow in the dark | Glow in the dark spinners are fun and colorful. They glow in different colors when exposed to light or darkness. They are great for playing at night or in low-light conditions. |
| Emoji | Emoji spinners are cute and expressive. They have different emoji faces on each lobe that change when spinning. They are perfect for showing your mood or personality. |
| Superhero | Superhero spinners are cool and powerful. They have different superhero logos or symbols on each lobe that represent your favorite heroes. They are ideal for fans of comics or movies. |
| Rainbow | Rainbow spinners are beautiful and vibrant. They have different colors on each lobe that create a rainbow effect when spinning. They are suitable for anyone who loves colors or diversity. |
    -

    How can I share my fidget spinner creations with others on apkpure?

    -

    You can share your fidget spinner creations with others on apkpure by using the share button on the top right corner of the screen. You can choose to share your spinner as an image or a video. You can also choose to share it via social media platforms such as Facebook, Twitter, Instagram, WhatsApp, or email.

    -

    Is fidget spinner apkpure safe and secure to use?

    -

    Yes, fidget spinner apkpure is safe and secure to use. The app does not require any permissions or access to your personal data or device settings. The app also does not contain any viruses, malware, or spyware that could harm your device or compromise your privacy. The app is verified by apkpure.com, which is a trusted source for downloading apps.

    -

    How can I contact the developers of fidget spinner apkpure for feedback or support?

    -

You can contact the developers of fidget spinner apkpure for feedback or support by using the feedback button in the bottom right corner of the screen. You can also email them at support@ketchapp.com or visit their website at https://www.ketchapp.com/. They are always happy to hear from their users and to improve the app based on user feedback and suggestions.

    -

    What are some of the alternatives to fidget spinner apkpure?

    -

    If you are looking for some other apps that let you play with fidget spinners, you might want to check out these alternatives:

    -
      -
    • Fidget Spinner by Words Mobile: This app lets you spin over 100 different types of fidget spinners, each with its own unique sound and vibration. You can also collect coins and upgrade your spinners to make them faster and more stable.
    • -
    • Fidget Spinner Simulator by Fidget Spinner Games: This app lets you create your own custom fidget spinners, choosing from various shapes, colors, materials, and stickers. You can also spin your spinners in different environments, such as a classroom, a park, or a space station.
    • -
    • Fidget Spinner 3D by Timuz Games: This app lets you experience the realistic physics and graphics of fidget spinners in 3D. You can also unlock new spinners by completing challenges and achievements.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/contrib/base64-arraybuffer.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/contrib/base64-arraybuffer.js deleted file mode 100644 index b92118e51bad07359d321be13795fb5cc1fdb874..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io-parser/build/cjs/contrib/base64-arraybuffer.js +++ /dev/null @@ -1,48 +0,0 @@ -"use strict"; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.decode = exports.encode = void 0; -// imported from https://github.com/socketio/base64-arraybuffer -const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; -// Use a lookup table to find the index. -const lookup = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); -for (let i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; -} -const encode = (arraybuffer) => { - let bytes = new Uint8Array(arraybuffer), i, len = bytes.length, base64 = ''; - for (i = 0; i < len; i += 3) { - base64 += chars[bytes[i] >> 2]; - base64 += chars[((bytes[i] & 3) << 4) | (bytes[i + 1] >> 4)]; - base64 += chars[((bytes[i + 1] & 15) << 2) | (bytes[i + 2] >> 6)]; - base64 += chars[bytes[i + 2] & 63]; - } - if (len % 3 === 2) { - base64 = base64.substring(0, base64.length - 1) + '='; - } - else if (len % 3 === 1) { - base64 = base64.substring(0, base64.length - 2) + '=='; - } - return base64; -}; -exports.encode = encode; -const decode = (base64) => { - let bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - const arraybuffer = new ArrayBuffer(bufferLength), bytes = new Uint8Array(arraybuffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup[base64.charCodeAt(i)]; - encoded2 = lookup[base64.charCodeAt(i + 1)]; - encoded3 = lookup[base64.charCodeAt(i + 2)]; - encoded4 = lookup[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return arraybuffer; -}; -exports.decode = decode; diff --git a/spaces/fkhuggingme/gpt-academic/crazy_functional.py b/spaces/fkhuggingme/gpt-academic/crazy_functional.py deleted file mode 100644 index 4b29aef50a59265fc0b5d6b6dea68c6efea939bb..0000000000000000000000000000000000000000 --- a/spaces/fkhuggingme/gpt-academic/crazy_functional.py +++ /dev/null @@ -1,239 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - - -def get_crazy_functions(): - ###################### 第一组插件 ########################### - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个Rect项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - from crazy_functions.Latex全文润色 import Latex英文润色 - from crazy_functions.询问多个大语言模型 import 同时问询 - from crazy_functions.解析项目源代码 import 解析一个Lua项目 - from 
crazy_functions.解析项目源代码 import 解析一个CSharp项目 - from crazy_functions.总结word文档 import 总结word文档 - from crazy_functions.解析JupyterNotebook import 解析ipynb文件 - from crazy_functions.对话历史存档 import 对话历史存档 - from crazy_functions.对话历史存档 import 载入对话历史存档 - from crazy_functions.对话历史存档 import 删除所有本地对话历史记录 - - from crazy_functions.批量Markdown翻译 import Markdown英译中 - function_plugins = { - "解析整个Python项目": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(解析一个Python项目) - }, - "载入对话历史存档": { - "AsButton":False, - "Function": HotReload(载入对话历史存档) - }, - "删除所有本地对话历史记录(请谨慎操作)": { - "AsButton":False, - "Function": HotReload(删除所有本地对话历史记录) - }, - "[测试功能] 解析Jupyter Notebook文件": { - "Color": "stop", - "AsButton":False, - "Function": HotReload(解析ipynb文件), - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "若输入0,则不解析notebook中的Markdown块", # 高级参数输入区的显示提示 - }, - "批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目的头文件) - }, - "解析整个C++项目(.cpp/.hpp/.c/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个C项目) - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Golang项目) - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Java项目) - }, - "解析整个React项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Rect项目) - }, - "解析整个Lua项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个Lua项目) - }, - "解析整个CSharp项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析一个CSharp项目) - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(读文章写摘要) - }, - "Markdown/Readme英译中": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "Function": HotReload(Markdown英译中) - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量生成函数注释) - }, - "保存当前的对话": { - "Function": HotReload(对话历史存档) - }, - "[多线程Demo] 解析此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(解析项目本身) - }, - "[多线程demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(全项目切换英文) - }, - "[插件demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档 - from crazy_functions.谷歌检索小助手 import 谷歌检索小助手 - from crazy_functions.理解PDF文档内容 import 理解PDF文档内容标准文件输入 - from crazy_functions.Latex全文润色 import Latex中文润色 - from crazy_functions.Latex全文翻译 import Latex中译英 - from crazy_functions.Latex全文翻译 import Latex英译中 - from crazy_functions.批量Markdown翻译 import Markdown中译英 - - function_plugins.update({ - "批量翻译PDF文档(多线程)": { - "Color": "stop", - "AsButton": True, # 加入下拉菜单中 - "Function": HotReload(批量翻译PDF文档) - }, - "询问多个GPT模型": { - "Color": "stop", # 按钮颜色 - "Function": HotReload(同时问询) - }, - "[测试功能] 批量总结PDF文档": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(批量总结PDF文档) - }, - "[测试功能] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": 
HotReload(批量总结PDF文档pdfminer) - }, - "谷歌学术检索助手(输入谷歌学术搜索页url)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(谷歌检索小助手) - }, - - "理解PDF文档内容 (模仿ChatPDF)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(理解PDF文档内容标准文件输入) - }, - "[测试功能] 英文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英文润色) - }, - "[测试功能] 中文Latex项目全文润色(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中文润色) - }, - "[测试功能] Latex项目全文中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex中译英) - }, - "[测试功能] Latex项目全文英译中(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Latex英译中) - }, - "[测试功能] 批量Markdown中译英(输入路径或上传压缩包)": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(Markdown中译英) - }, - - - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - - from crazy_functions.联网的ChatGPT import 连接网络回答问题 - function_plugins.update({ - "连接网络回答问题(先输入问题,再点击按钮,需要访问谷歌)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(连接网络回答问题) - } - }) - - from crazy_functions.解析项目源代码 import 解析任意code项目 - function_plugins.update({ - "解析项目源代码(手动指定和筛选源代码文件类型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "输入时用逗号隔开, *代表通配符, 加了^代表不匹配; 不输入代表全部匹配。例如: \"*.c, ^*.cpp, config.toml, ^*.toml\"", # 高级参数输入区的显示提示 - "Function": HotReload(解析任意code项目) - }, - }) - from crazy_functions.询问多个大语言模型 import 同时问询_指定模型 - function_plugins.update({ - "询问多个GPT模型(手动指定询问哪些模型)": { - "Color": "stop", - "AsButton": False, - "AdvancedArgs": True, # 调用时,唤起高级参数输入区(默认False) - "ArgsReminder": "支持任意数量的llm接口,用&符号分隔。例如chatglm&gpt-3.5-turbo&api2d-gpt-4", # 高级参数输入区的显示提示 - "Function": HotReload(同时问询_指定模型) - }, - }) - ###################### 第n组插件 ########################### - return function_plugins diff --git "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" "b/spaces/fkhuggingme/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" deleted file mode 100644 index bc75875b5c8c8c7bf1d0628c3da3cee1c34e7e2d..0000000000000000000000000000000000000000 --- "a/spaces/fkhuggingme/gpt-academic/crazy_functions/\345\257\271\350\257\235\345\216\206\345\217\262\345\255\230\346\241\243.py" +++ /dev/null @@ -1,143 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import re - -def write_chat_to_file(chatbot, history=None, file_name=None): - """ - 将对话记录history以Markdown格式写入文件中。如果没有指定文件名,则使用当前时间生成文件名。 - """ - import os - import time - if file_name is None: - file_name = 'chatGPT对话历史' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html' - os.makedirs('./gpt_log/', exist_ok=True) - with 
open(f'./gpt_log/{file_name}', 'w', encoding='utf8') as f: - from theme import advanced_css - f.write(f'对话历史') - for i, contents in enumerate(chatbot): - for j, content in enumerate(contents): - try: # 这个bug没找到触发条件,暂时先这样顶一下 - if type(content) != str: content = str(content) - except: - continue - f.write(content) - if j == 0: - f.write('
    ') - f.write('
    \n\n') - f.write('
    \n\n raw chat context:\n') - f.write('') - for h in history: - f.write("\n>>>" + h) - f.write('') - res = '对话历史写入:' + os.path.abspath(f'./gpt_log/{file_name}') - print(res) - return res - -def gen_file_preview(file_name): - try: - with open(file_name, 'r', encoding='utf8') as f: - file_content = f.read() - # pattern to match the text between and - pattern = re.compile(r'.*?', flags=re.DOTALL) - file_content = re.sub(pattern, '', file_content) - html, history = file_content.split('
    \n\n raw chat context:\n') - history = history.strip('') - history = history.strip('') - history = history.split("\n>>>") - return list(filter(lambda x:x!="", history))[0][:100] - except: - return "" - -def read_file_to_chat(chatbot, history, file_name): - with open(file_name, 'r', encoding='utf8') as f: - file_content = f.read() - # pattern to match the text between and - pattern = re.compile(r'.*?', flags=re.DOTALL) - file_content = re.sub(pattern, '', file_content) - html, history = file_content.split('
    \n\n raw chat context:\n') - history = history.strip('') - history = history.strip('') - history = history.split("\n>>>") - history = list(filter(lambda x:x!="", history)) - html = html.split('
    \n\n') - html = list(filter(lambda x:x!="", html)) - chatbot.clear() - for i, h in enumerate(html): - i_say, gpt_say = h.split('
    ') - chatbot.append([i_say, gpt_say]) - chatbot.append([f"存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"]) - return chatbot, history - -@CatchException -def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - - chatbot.append(("保存当前对话", - f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用“载入对话历史存档”还原当下的对话。\n警告!被保存的对话历史可以被使用该系统的任何人查阅。")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - -def hide_cwd(str): - import os - current_path = os.getcwd() - replace_path = "." - return str.replace(current_path, replace_path) - -@CatchException -def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - from .crazy_utils import get_files_from_everything - success, file_manifest, _ = get_files_from_everything(txt, type='.html') - - if not success: - if txt == "": txt = '空空如也的输入栏' - import glob - local_history = "
    ".join(["`"+hide_cwd(f)+f" ({gen_file_preview(f)})"+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)]) - chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:
    {local_history}"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - try: - chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - except: - chatbot.append([f"载入对话历史文件", f"对话历史文件损坏!"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - -@CatchException -def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - - import glob, os - local_history = "
    ".join(["`"+hide_cwd(f)+"`" for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True)]) - for f in glob.glob(f'gpt_log/**/chatGPT对话历史*.html', recursive=True): - os.remove(f) - chatbot.append([f"删除所有历史对话文件", f"已删除
    {local_history}"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - diff --git a/spaces/fkunn1326/Kokohachi-NoAI-Diffusion/app.py b/spaces/fkunn1326/Kokohachi-NoAI-Diffusion/app.py deleted file mode 100644 index b124d4f360d665548202c919a0d30defe2f345e5..0000000000000000000000000000000000000000 --- a/spaces/fkunn1326/Kokohachi-NoAI-Diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Kokohachi/NoAI-Diffusion").launch() \ No newline at end of file diff --git a/spaces/flax-community/spanish-image-captioning/sections/acknowledgements.md b/spaces/flax-community/spanish-image-captioning/sections/acknowledgements.md deleted file mode 100644 index 6acc209cf5110bb2758763a808b7d39ab95ca945..0000000000000000000000000000000000000000 --- a/spaces/flax-community/spanish-image-captioning/sections/acknowledgements.md +++ /dev/null @@ -1,6 +0,0 @@ -## Acknowledgements -We'd like to thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the cleaned CC-12M data on our TPU-VMs and we are very grateful to him. - -This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us and helped us review our approach and guided us throughout the project. We especially thank Patrick for going out of the way and allowing us extra TPU time so that we could work on this project. - -Last but not the least, we thank the Google Team for helping answer our queries on the Slack channel, and for providing us TPU-VMs. \ No newline at end of file diff --git a/spaces/florim/MedGPT/autogpt/prompt.py b/spaces/florim/MedGPT/autogpt/prompt.py deleted file mode 100644 index 03c132acdf26d08deeee119e41a561f430957806..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/prompt.py +++ /dev/null @@ -1,204 +0,0 @@ -from colorama import Fore - -from autogpt.config import Config -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config -from autogpt.logs import logger -from autogpt.promptgenerator import PromptGenerator -from autogpt.setup import prompt_user -from autogpt.utils import clean_input - -CFG = Config() - - -def get_prompt() -> str: - """ - This function generates a prompt string that includes various constraints, - commands, resources, and performance evaluations. - - Returns: - str: The generated prompt string. - """ - - # Initialize the Config object - cfg = Config() - - # Initialize the PromptGenerator object - prompt_generator = PromptGenerator() - - # Add constraints to the PromptGenerator object - prompt_generator.add_constraint( - "~4000 word limit for short term memory. Your short term memory is short, so" - " immediately save important information to files." - ) - prompt_generator.add_constraint( - "If you are unsure how you previously did something or want to recall past" - " events, thinking about similar events will help you remember." - ) - prompt_generator.add_constraint("No user assistance") - prompt_generator.add_constraint( - 'Exclusively use the commands listed in double quotes e.g. 
"command name"' - ) - prompt_generator.add_constraint( - "Use subprocesses for commands that will not terminate within a few minutes" - ) - - # Define the command list - commands = [ - ("Google Search", "google", {"input": ""}), - ( - "Browse Website", - "browse_website", - {"url": "", "question": ""}, - ), - ( - "Start GPT Agent", - "start_agent", - {"name": "", "task": "", "prompt": ""}, - ), - ( - "Message GPT Agent", - "message_agent", - {"key": "", "message": ""}, - ), - ("List GPT Agents", "list_agents", {}), - ("Delete GPT Agent", "delete_agent", {"key": ""}), - ( - "Clone Repository", - "clone_repository", - {"repository_url": "", "clone_path": ""}, - ), - ("Write to file", "write_to_file", {"file": "", "text": ""}), - ("Read file", "read_file", {"file": ""}), - ("Append to file", "append_to_file", {"file": "", "text": ""}), - ("Delete file", "delete_file", {"file": ""}), - ("Search Files", "search_files", {"directory": ""}), - ("Analyze Code", "analyze_code", {"code": ""}), - ( - "Get Improved Code", - "improve_code", - {"suggestions": "", "code": ""}, - ), - ( - "Write Tests", - "write_tests", - {"code": "", "focus": ""}, - ), - ("Execute Python File", "execute_python_file", {"file": ""}), - ("Task Complete (Shutdown)", "task_complete", {"reason": ""}), - ("Generate Image", "generate_image", {"prompt": ""}), - ("Send Tweet", "send_tweet", {"text": ""}), - ] - - # Only add the audio to text command if the model is specified - if cfg.huggingface_audio_to_text_model: - commands.append( - ("Convert Audio to text", "read_audio_from_file", {"file": ""}), - ) - - # Only add shell command to the prompt if the AI is allowed to execute it - if cfg.execute_local_commands: - commands.append( - ( - "Execute Shell Command, non-interactive commands only", - "execute_shell", - {"command_line": ""}, - ), - ) - commands.append( - ( - "Execute Shell Command Popen, non-interactive commands only", - "execute_shell_popen", - {"command_line": ""}, - ), - ) - - # Only add the download file command if the AI is allowed to execute it - if cfg.allow_downloads: - commands.append( - ( - "Downloads a file from the internet, and stores it locally", - "download_file", - {"url": "", "file": ""}, - ), - ) - - # Add these command last. - commands.append( - ("Do Nothing", "do_nothing", {}), - ) - commands.append( - ("Task Complete (Shutdown)", "task_complete", {"reason": ""}), - ) - - # Add commands to the PromptGenerator object - for command_label, command_name, args in commands: - prompt_generator.add_command(command_label, command_name, args) - - # Add resources to the PromptGenerator object - prompt_generator.add_resource( - "Internet access for searches and information gathering." - ) - prompt_generator.add_resource("Long Term memory management.") - prompt_generator.add_resource( - "GPT-3.5 powered Agents for delegation of simple tasks." - ) - prompt_generator.add_resource("File output.") - - # Add performance evaluations to the PromptGenerator object - prompt_generator.add_performance_evaluation( - "Continuously review and analyze your actions to ensure you are performing to" - " the best of your abilities." - ) - prompt_generator.add_performance_evaluation( - "Constructively self-criticize your big-picture behavior constantly." - ) - prompt_generator.add_performance_evaluation( - "Reflect on past decisions and strategies to refine your approach." - ) - prompt_generator.add_performance_evaluation( - "Every command has a cost, so be smart and efficient. Aim to complete tasks in" - " the least number of steps." 
- ) - - # Generate the prompt string - return prompt_generator.generate_prompt_string() - - -def construct_prompt() -> str: - """Construct the prompt for the AI to respond to - - Returns: - str: The prompt string - """ - config = AIConfig.load(CFG.ai_settings_file) - if CFG.skip_reprompt and config.ai_name: - logger.typewriter_log("Name :", Fore.GREEN, config.ai_name) - logger.typewriter_log("Role :", Fore.GREEN, config.ai_role) - logger.typewriter_log("Goals:", Fore.GREEN, f"{config.ai_goals}") - elif config.ai_name: - logger.typewriter_log( - "Welcome back! ", - Fore.GREEN, - f"Would you like me to return to being {config.ai_name}?", - speak_text=True, - ) - should_continue = clean_input( - f"""Continue with the last settings? -Name: {config.ai_name} -Role: {config.ai_role} -Goals: {config.ai_goals} -Continue (y/n): """ - ) - if should_continue.lower() == "n": - config = AIConfig() - - if not config.ai_name: - config = prompt_user() - config.save(CFG.ai_settings_file) - - # Get rid of this global: - global ai_name - ai_name = config.ai_name - - return config.construct_full_prompt() diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/speech/brian.py b/spaces/fuckyoudeki/AutoGPT/autogpt/speech/brian.py deleted file mode 100644 index 821fdf2f482a9cfa928e5c9680152ad6766d8326..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/speech/brian.py +++ /dev/null @@ -1,40 +0,0 @@ -""" Brian speech module for autogpt """ -import os - -import requests -from playsound import playsound - -from autogpt.speech.base import VoiceBase - - -class BrianSpeech(VoiceBase): - """Brian speech module for autogpt""" - - def _setup(self) -> None: - """Setup the voices, API key, etc.""" - pass - - def _speech(self, text: str, _: int = 0) -> bool: - """Speak text using Brian with the streamelements API - - Args: - text (str): The text to speak - - Returns: - bool: True if the request was successful, False otherwise - """ - tts_url = ( - f"https://api.streamelements.com/kappa/v2/speech?voice=Brian&text={text}" - ) - response = requests.get(tts_url) - - if response.status_code == 200: - with open("speech.mp3", "wb") as f: - f.write(response.content) - playsound("speech.mp3") - os.remove("speech.mp3") - return True - else: - print("Request failed with status code:", response.status_code) - print("Response content:", response.content) - return False diff --git a/spaces/gaviego/mnist/train.py b/spaces/gaviego/mnist/train.py deleted file mode 100644 index 1fae2a5a4708b1dec35055ec3ed894ba16bd4744..0000000000000000000000000000000000000000 --- a/spaces/gaviego/mnist/train.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -import torch.nn as nn -import torch.optim as optim -import torchvision -import torchvision.transforms as transforms -from models import Net -# Load the MNIST dataset -train_set = torchvision.datasets.MNIST(root='./data', train=True, - download=True, transform=transforms.ToTensor()) -test_set = torchvision.datasets.MNIST(root='./data', train=False, - download=True, transform=transforms.ToTensor()) - -train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, - shuffle=True) -test_loader = torch.utils.data.DataLoader(test_set, batch_size=32, - shuffle=False) - - - -net = Net() - -# Use CrossEntropyLoss for multi-class classification -criterion = nn.CrossEntropyLoss() - -optimizer = optim.SGD(net.parameters(), lr=0.01) - -# Train the model -for epoch in range(50): # Loop over the dataset multiple times - for i, data in enumerate(train_loader, 0): - inputs, labels = data - - 
optimizer.zero_grad() - outputs = net(inputs) - loss = criterion(outputs, labels) - loss.backward() - optimizer.step() - -print('Finished Training') - -# Test the model -correct = 0 -total = 0 -with torch.no_grad(): - for data in test_loader: - images, labels = data - outputs = net(images) - _, predicted = torch.max(outputs.data, 1) - total += labels.size(0) - correct += (predicted == labels).sum().item() - -print(f'Accuracy of the network on test images: {100 * correct / total}%') - -torch.save(net,'mnist.pth') \ No newline at end of file diff --git a/spaces/georgesX/finetuned_diffusion/app.py b/spaces/georgesX/finetuned_diffusion/app.py deleted file mode 100644 index e2d3bb2d0344d6fdfa1d5cbc3a5443c73c5c00b6..0000000000000000000000000000000000000000 --- a/spaces/georgesX/finetuned_diffusion/app.py +++ /dev/null @@ -1,349 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil -import random - - -start_time = time.time() -is_colab = utils.is_google_colab() -state = None -current_steps = 25 - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - -models = [ - Model("Arcane", "nitrosocke/Arcane-Diffusion", "arcane style "), - Model("Dreamlike Diffusion 1.0", "dreamlike-art/dreamlike-diffusion-1.0", "dreamlikeart "), - Model("Archer", "nitrosocke/archer-diffusion", "archer style "), - Model("Anything V3", "Linaqruf/anything-v3.0", ""), - Model("Modern Disney", "nitrosocke/mo-di-diffusion", "modern disney style "), - Model("Classic Disney", "nitrosocke/classic-anim-diffusion", "classic disney style "), - Model("Loving Vincent (Van Gogh)", "dallinmackay/Van-Gogh-diffusion", "lvngvncnt "), - Model("Wavyfusion", "wavymulder/wavyfusion", "wa-vy style "), - Model("Analog Diffusion", "wavymulder/Analog-Diffusion", "analog style "), - Model("Redshift renderer (Cinema4D)", "nitrosocke/redshift-diffusion", "redshift style "), - Model("Midjourney v4 style", "prompthero/midjourney-v4-diffusion", "mdjrny-v4 style "), - Model("Waifu", "hakurei/waifu-diffusion"), - Model("Cyberpunk Anime", "DGSpitzer/Cyberpunk-Anime-Diffusion", "dgs illustration style "), - Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), - Model("TrinArt v2", "naclbit/trinart_stable_diffusion_v2"), - Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), - Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), - Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy "), - Model("Pokémon", "lambdalabs/sd-pokemon-diffusers"), - Model("Pony Diffusion", "AstraliteHeart/pony-diffusion"), - Model("Robo Diffusion", "nousr/robo-diffusion"), - Model("Epic Diffusion", "johnslegers/epic-diffusion") - ] - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained( - current_model.path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - -else: - pipe = 
StableDiffusionPipeline.from_pretrained( - current_model.path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def update_state(new_state): - global state - state = new_state - -def update_state_info(old_state): - if state and state != old_state: - return gr.update(value=state) - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - -def on_model_change(model_name): - - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!" - - return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix) - -def on_steps_change(steps): - global current_steps - current_steps = steps - -def pipe_callback(step: int, timestep: int, latents: torch.FloatTensor): - update_state(f"{step}/{current_steps} steps")#\nTime left, sec: {timestep/100:.0f}") - -def inference(model_name, prompt, guidance, steps, n_images=1, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""): - - update_state(" ") - - print(psutil.virtual_memory()) # print memory usage - - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - # generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - if seed == 0: - seed = random.randint(0, 2147483647) - - generator = torch.Generator('cuda').manual_seed(seed) - - try: - if img is not None: - return img_to_img(model_path, prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed), f"Done. Seed: {seed}" - else: - return txt_to_img(model_path, prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed), f"Done. 
Seed: {seed}" - except Exception as e: - return None, error_str(e) - -def txt_to_img(model_path, prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed): - - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - update_state(f"Loading {current_model.name} text-to-image model...") - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - else: - pipe = StableDiffusionPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - # pipe = pipe.to("cpu") - # pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_images_per_prompt=n_images, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator, - callback=pipe_callback) - - # update_state(f"Done. Seed: {seed}") - - return replace_nsfw_images(result) - -def img_to_img(model_path, prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - update_state(f"Loading {current_model.name} image-to-image model...") - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - else: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - # pipe = pipe.to("cpu") - # pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_images_per_prompt=n_images, - image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - # width = width, - # height = height, - generator = generator, - callback=pipe_callback) - - # update_state(f"Done. 
Seed: {seed}") - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - if is_colab: - return results.images - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images - -# css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -# """ -with gr.Blocks(css="style.css") as demo: - gr.HTML( - f""" -
    -
    -

    Finetuned Diffusion

    -
    -

    - Demo for multiple fine-tuned Stable Diffusion models, trained on different styles:
- Arcane, Archer, Elden Ring, Spider-Verse, Modern Disney, Classic Disney, Loving Vincent (Van Gogh), Redshift renderer (Cinema4D), Midjourney v4 style, Waifu, Pokémon, Pony Diffusion, Robo Diffusion, Cyberpunk Anime, Tron Legacy, Balloon Art + in the Colab notebook you can load any other Diffusers 🧨 SD model hosted on HuggingFace 🤗. -

    -

You can skip the queue and load custom models in the Colab: Open In Colab

    - Running on {device}{(" in a Google Colab." if is_colab else "")} -

    -

You can also duplicate this space and upgrade to GPU by going to settings:
    - Duplicate Space

    -
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True) - gr.HTML("
    Custom models have to be downloaded first, so give it some time.
    ") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - - # image_out = gr.Image(height=512) - gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[2], height="auto") - - state_info = gr.Textbox(label="State", show_label=False, max_lines=2).style(container=False) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=current_steps, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - steps.change(on_steps_change, inputs=[steps], outputs=[], queue=False) - - inputs = [model_name, prompt, guidance, steps, n_images, width, height, seed, image, strength, neg_prompt] - outputs = [gallery, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - ex = gr.Examples([ - [models[7].name, "tiny cute and adorable kitten adventurer dressed in a warm overcoat with survival gear on a winters day", 7.5, 25], - [models[4].name, "portrait of dwayne johnson", 7.0, 35], - [models[5].name, "portrait of a beautiful alyx vance half life", 10, 25], - [models[6].name, "Aloy from Horizon: Zero Dawn, half body portrait, smooth, detailed armor, beautiful face, illustration", 7.0, 30], - [models[5].name, "fantasy portrait painting, digital art", 4.0, 20], - ], inputs=[model_name, prompt, guidance, steps], outputs=outputs, fn=inference, cache_examples=False) - - gr.HTML(""" -
    -
    -

    Models by @nitrosocke, @haruu1367, @Helixngc7293, @dal_mack, @prompthero and others. ❤️

    -

This space uses the DPM-Solver++ sampler by Cheng Lu, et al.

    -

    Space by:
    - Twitter Follow
    - GitHub followers



    - Buy Me A Coffee

    -

    visitors

    -
    - """) - - demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -# if not is_colab: -demo.queue(concurrency_count=1) -demo.launch(debug=is_colab, share=is_colab) diff --git a/spaces/gerardo/elon_or_not/README.md b/spaces/gerardo/elon_or_not/README.md deleted file mode 100644 index 38e89b988eb11b9f3b5ab84e90633979b7fa3054..0000000000000000000000000000000000000000 --- a/spaces/gerardo/elon_or_not/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Elon_or_not -emoji: 🏢 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/gotiQspiryo/whisper-ui/Circuitlogix Pro V7 04 Cracked Rar.md b/spaces/gotiQspiryo/whisper-ui/Circuitlogix Pro V7 04 Cracked Rar.md deleted file mode 100644 index f306b68ff5632b33d84d39b0c3de87be376d0ee1..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/Circuitlogix Pro V7 04 Cracked Rar.md +++ /dev/null @@ -1,62 +0,0 @@ -## Circuitlogix Pro V7 04 Cracked Rar - - - -**Download File ---> [https://mauletnaci.blogspot.com/?download=2twu5Z](https://mauletnaci.blogspot.com/?download=2twu5Z)** - - - -# How to Download and Install Circuitlogix Pro V7 04 Cracked Rar - - - -Circuitlogix Pro is a powerful and easy-to-use software for designing and simulating electronic circuits. It allows you to create virtual electronic circuits using a large database of components and test them for issues before proceeding to building the actual board. Circuitlogix Pro also supports mixed-mode simulation, which means you can combine analog and digital components in the same circuit. - - - -If you want to download and install Circuitlogix Pro V7 04 cracked rar, you will need to follow these steps: - - - -1. Download the Circuitlogix Pro V7 04 cracked rar file from a reliable source. You can use the link below, which is from a web search result[^1^]. - - -[https://rophocorti.weebly.com/circuitlogix-pro-v7-04-cracked-rar.html](https://rophocorti.weebly.com/circuitlogix-pro-v7-04-cracked-rar.html) - -2. Extract the rar file using a software like WinRAR or 7-Zip. You will get a folder named CircuitLogix.Pro.v7.04.cracked.rar. - -3. Open the folder and run the setup.exe file. Follow the instructions on the screen to install Circuitlogix Pro on your computer. - -4. After the installation is complete, copy the file named CLX.exe from the folder Crack to the installation directory of Circuitlogix Pro. This will replace the original file and activate the software. - -5. Enjoy using Circuitlogix Pro V7 04 cracked rar for free! - - - -Note: This article is for educational purposes only. We do not condone or encourage piracy or illegal use of software. Please buy the original software from the official website if you like it and want to support the developers. - - - -Circuitlogix Pro V7 04 cracked rar is a useful tool for students, hobbyists, and professionals who want to learn and practice electronics. It offers a realistic and interactive simulation environment that mimics the behavior of real-world components and devices. You can also view the waveforms and measurements of various signals and parameters using the built-in oscilloscope and multimeter. - - - -With Circuitlogix Pro V7 04 cracked rar, you can design and test circuits of any complexity and size. 
You can choose from over 10,000 components, including resistors, capacitors, transistors, diodes, LEDs, switches, relays, logic gates, microcontrollers, sensors, displays, and more. You can also create your own custom components and libraries using the Component Editor. Circuitlogix Pro supports SPICE models and subcircuits, which means you can import and use components from other sources as well. - - - -Circuitlogix Pro V7 04 cracked rar also has a 3D mode that allows you to view and edit your circuits in three dimensions. You can rotate, zoom, and pan the 3D view to see your circuit from different angles and perspectives. You can also export your 3D circuits as images or videos for presentation or documentation purposes. - - - -Circuitlogix Pro V7 04 cracked rar is not only a simulation software, but also a learning platform. It comes with a comprehensive help system that explains the features and functions of the software and the components. It also has a tutorial mode that guides you through the basics of circuit design and simulation. You can also access hundreds of sample circuits and projects that demonstrate various concepts and applications of electronics. - - - -Circuitlogix Pro V7 04 cracked rar is compatible with Windows XP, Vista, 7, 8, and 10. It requires a minimum of 512 MB of RAM and 100 MB of hard disk space. It also supports multiple languages, including English, French, German, Spanish, Portuguese, Italian, Russian, Chinese, Japanese, and Korean. - - - -If you want to download and install Circuitlogix Pro V7 04 cracked rar for free, you can follow the steps mentioned above. However, we recommend that you purchase the original software from the official website if you want to enjoy the full benefits and support of Circuitlogix Pro. You can also check out the other products from Logic Design Inc., such as Circuitlogix Student Version, LogicSim, RoboLogix, and PLCLogix. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Bullet Witch Torrent Download [key Serial Number].md b/spaces/gotiQspiryo/whisper-ui/examples/Bullet Witch Torrent Download [key Serial Number].md deleted file mode 100644 index 56cda1832713fe7a4440e786185b246e5942d809..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Bullet Witch Torrent Download [key Serial Number].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bullet Witch Torrent Download [key serial number]


    Download ····· https://urlgoal.com/2uyNc4



    -
    -See the best & latest Army Dlc 2 Cheat Code on isCoupon. ... Bullet Witch is an action-adventure third-person shooter, which follows a young witch as ... Key Generator (Keygen) Serial Key/Code Steam dota 2 dota 2 cheat dota 2 cheat download ... FULL GAME – REBOOT EDITION – CRACKED – DIRECT LINK – TORRENT ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Hello Kitty Food Town v1.9 Mod Apk Hack (Unlimited Money) Latest Version Tips and Tricks.md b/spaces/gotiQspiryo/whisper-ui/examples/Hello Kitty Food Town v1.9 Mod Apk Hack (Unlimited Money) Latest Version Tips and Tricks.md deleted file mode 100644 index aa9c89e46fd20f8340093e9c33e09bc87d26f960..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Hello Kitty Food Town v1.9 Mod Apk Hack (Unlimited Money) Latest Version Tips and Tricks.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

My Talking Hello Kitty (com.unicogames.talking.hellokitty) is a game mod APK on Android; download the latest version of My Talking Hello Kitty Hack Mod (Unlimited Money / Gems) 2022 for Android. This game mod APK can be played for free and does not require root.

    -

    Hello Kitty Food Town v1.9 Mod Apk Hack (Unlimited Money) Latest Version


    Download · https://urlgoal.com/2uyN98



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Keygen LINK Corel Draw X7 Bagas31.md b/spaces/gotiQspiryo/whisper-ui/examples/Keygen LINK Corel Draw X7 Bagas31.md deleted file mode 100644 index 7ed3aa4dd1db561004e5ee0f7eecd1e0a01c0ba6..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Keygen LINK Corel Draw X7 Bagas31.md +++ /dev/null @@ -1,6 +0,0 @@ -

    keygen corel draw x7 bagas31


    Download Zip >>> https://urlgoal.com/2uyNvZ



    - -activation code corel draw x7 32 bit corel draw x7 activation code free keygen ... serial number coreldraw 2017 64 bit download keygen coreldraw x7 bagas31 . 1fdad05405
    -
    -
    -

    diff --git a/spaces/gradio/HuBERT/fairseq/models/lightconv_lm.py b/spaces/gradio/HuBERT/fairseq/models/lightconv_lm.py deleted file mode 100644 index 1d9efc4e42a5ecc1b83338055f18ade5a83ea666..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/lightconv_lm.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.lightconv import Embedding, LightConvDecoder -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder - - -@register_model("lightconv_lm") -class LightConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--attention-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-output-dim", - type=int, - metavar="N", - help="decoder output dimension", - ) - parser.add_argument( - "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension" - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-normalize-before", - default=False, - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--adaptive-softmax-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--no-token-positional-embeddings", - default=False, - action="store_true", - help="if set, disables positional embeddings (outside self attention)", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - default=False, - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--character-embeddings", - default=False, - action="store_true", - help="if set, uses character embedding convolutions to produce token embeddings", - ) - parser.add_argument( - "--character-filters", - type=str, - metavar="LIST", - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - help="size of character embeddings", - ) - parser.add_argument( - "--character-embedding-dim", - type=int, - metavar="N", - default=4, - help="size of character embeddings", - ) - parser.add_argument( - "--char-embedder-highway-layers", - type=int, - metavar="N", - default=2, - help="number of highway layers for character token embeddder", - ) - parser.add_argument( - "--adaptive-input", - default=False, - action="store_true", - help="if set, uses adaptive input", - ) - parser.add_argument( - "--adaptive-input-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--adaptive-input-cutoff", - metavar="EXPR", - help="comma separated list of adaptive input cutoff points.", - ) - parser.add_argument( - "--tie-adaptive-weights", - action="store_true", - help="if set, ties the weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--tie-adaptive-proj", - action="store_true", - help="if set, ties the projection weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_lm_architecture(args) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = args.tokens_per_sample - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = args.tokens_per_sample - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.dictionary, - eval(args.character_filters), - args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.dictionary), - 
task.dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - utils.eval_str_list(args.adaptive_input_cutoff, type=int), - ) - else: - embed_tokens = Embedding( - len(task.dictionary), args.decoder_input_dim, task.dictionary.pad() - ) - - if args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = LightConvDecoder( - args, - task.output_dictionary, - embed_tokens, - no_encoder_attn=True, - final_norm=False, - ) - return LightConvLanguageModel(decoder) - - -@register_model_architecture("lightconv_lm", "lightconv_lm") -def base_lm_architecture(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - - args.character_embeddings = getattr(args, "character_embeddings", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - # The model training is not stable without this - args.decoder_normalize_before = True - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv_lm", "lightconv_lm_gbw") -def lightconv_lm_gbw(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) diff --git a/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/pq.py b/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/pq.py deleted file mode 100644 index 
eddc2eb34602403f10979f54cd23a45bc2f104d5..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/pq.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .em import EM, EmptyClusterResolveError - - -class PQ(EM): - """ - Quantizes the layer weights W with the standard Product Quantization - technique. This learns a codebook of codewords or centroids of size - block_size from W. For further reference on using PQ to quantize - neural networks, see "And the Bit Goes Down: Revisiting the Quantization - of Neural Networks", Stock et al., ICLR 2020. - - PQ is performed in two steps: - (1) The matrix W (weights or fully-connected or convolutional layer) - is reshaped to (block_size, -1). - - If W is fully-connected (2D), its columns are split into - blocks of size block_size. - - If W is convolutional (4D), its filters are split along the - spatial dimension. - (2) We apply the standard EM/k-means algorithm to the resulting reshaped matrix. - - Args: - - W: weight matrix to quantize of size (in_features x out_features) - - block_size: size of the blocks (subvectors) - - n_centroids: number of centroids - - n_iter: number of k-means iterations - - eps: for cluster reassignment when an empty cluster is found - - max_tentatives for cluster reassignment when an empty cluster is found - - verbose: print information after each iteration - - Remarks: - - block_size be compatible with the shape of W - """ - - def __init__( - self, - W, - block_size, - n_centroids=256, - n_iter=20, - eps=1e-6, - max_tentatives=30, - verbose=True, - ): - self.block_size = block_size - W_reshaped = self._reshape(W) - super(PQ, self).__init__( - W_reshaped, - n_centroids=n_centroids, - n_iter=n_iter, - eps=eps, - max_tentatives=max_tentatives, - verbose=verbose, - ) - - def _reshape(self, W): - """ - Reshapes the matrix W as expained in step (1). - """ - - # fully connected: by convention the weight has size out_features x in_features - if len(W.size()) == 2: - self.out_features, self.in_features = W.size() - assert ( - self.in_features % self.block_size == 0 - ), "Linear: n_blocks must be a multiple of in_features" - return ( - W.reshape(self.out_features, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - - # convolutional: we reshape along the spatial dimension - elif len(W.size()) == 4: - self.out_channels, self.in_channels, self.k_h, self.k_w = W.size() - assert ( - self.in_channels * self.k_h * self.k_w - ) % self.block_size == 0, ( - "Conv2d: n_blocks must be a multiple of in_channels * k_h * k_w" - ) - return ( - W.reshape(self.out_channels, -1, self.block_size) - .permute(2, 1, 0) - .flatten(1, 2) - ) - # not implemented - else: - raise NotImplementedError(W.size()) - - def encode(self): - """ - Performs self.n_iter EM steps. - """ - - self.initialize_centroids() - for i in range(self.n_iter): - try: - self.step(i) - except EmptyClusterResolveError: - break - - def decode(self): - """ - Returns the encoded full weight matrix. Must be called after - the encode function. 
- """ - - # fully connected case - if "k_h" not in self.__dict__: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_features, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - # convolutional case - else: - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape(self.out_channels, self.in_channels, self.k_h, self.k_w) - ) diff --git a/spaces/gradio/longformer/scripts/triviaqa_utils/evaluation_utils.py b/spaces/gradio/longformer/scripts/triviaqa_utils/evaluation_utils.py deleted file mode 100644 index 3528887ba89cdc216461093b56e1b85fda61505c..0000000000000000000000000000000000000000 --- a/spaces/gradio/longformer/scripts/triviaqa_utils/evaluation_utils.py +++ /dev/null @@ -1,157 +0,0 @@ -# -*- coding: utf-8 -*- -""" Official evaluation script for v1.0 of the TriviaQA dataset. -Extended from the evaluation script for v1.1 of the SQuAD dataset. """ - -from __future__ import print_function -from collections import Counter -import string -import re -import sys -import argparse -from . import file_utils -from . import dataset_utils - - -def normalize_answer(s): - """Lower text and remove punctuation, articles and extra whitespace.""" - - def remove_articles(text): - return re.sub(r'\b(a|an|the)\b', ' ', text) - - def white_space_fix(text): - return ' '.join(text.split()) - - def handle_punc(text): - exclude = set(string.punctuation + "".join([u"‘", u"’", u"´", u"`"])) - return ''.join(ch if ch not in exclude else ' ' for ch in text) - - def lower(text): - return text.lower() - - def replace_underscore(text): - return text.replace('_', ' ') - - return white_space_fix(remove_articles(handle_punc(lower(replace_underscore(s))))).strip() - - -def f1_score(prediction, ground_truth): - prediction_tokens = normalize_answer(prediction).split() - ground_truth_tokens = normalize_answer(ground_truth).split() - common = Counter(prediction_tokens) & Counter(ground_truth_tokens) - num_same = sum(common.values()) - if num_same == 0: - return 0 - precision = 1.0 * num_same / len(prediction_tokens) - recall = 1.0 * num_same / len(ground_truth_tokens) - f1 = (2 * precision * recall) / (precision + recall) - return f1 - - -def exact_match_score(prediction, ground_truth): - return int(normalize_answer(prediction) == normalize_answer(ground_truth)) - - -def metric_max_over_ground_truths(metric_fn, prediction, ground_truths): - scores_for_ground_truths = [] - for ground_truth in ground_truths: - score = metric_fn(prediction, ground_truth) - scores_for_ground_truths.append(score) - return max(scores_for_ground_truths) - - -def is_exact_match(answer_object, prediction): - ground_truths = get_ground_truths(answer_object) - for ground_truth in ground_truths: - if exact_match_score(prediction, ground_truth): - return True - return False - - -def has_exact_match(ground_truths, candidates): - for ground_truth in ground_truths: - if ground_truth in candidates: - return True - return False - - -def get_ground_truths(answer): - return answer['NormalizedAliases'] + [normalize_answer(ans) for ans in answer.get('HumanAnswers', [])] - - -def get_oracle_score(ground_truth, predicted_answers, qid_list=None, mute=False): - exact_match = common = 0 - if qid_list is None: - qid_list = ground_truth.keys() - for qid in qid_list: - if qid not in predicted_answers: - if not mute: - message = 'Irrelavant question {} will receive score 0.'.format(qid) - print(message, file=sys.stderr) - continue - common += 1 - prediction = 
normalize_answer(predicted_answers[qid]) - ground_truths = get_ground_truths(ground_truth[qid]) - em_for_this_question = has_exact_match(ground_truths, prediction) - exact_match += int(em_for_this_question) - - exact_match = 100.0 * exact_match / len(qid_list) - - return {'oracle_exact_match': exact_match, 'common': common, 'denominator': len(qid_list), - 'pred_len': len(predicted_answers), 'gold_len': len(ground_truth)} - - -def evaluate_triviaqa(ground_truth, predicted_answers, qid_list=None, mute=False): - f1 = exact_match = common = 0 - if qid_list is None: - qid_list = ground_truth.keys() - for qid in qid_list: - if qid not in predicted_answers: - if not mute: - message = 'Missed question {} will receive score 0.'.format(qid) - print(message, file=sys.stderr) - continue - if qid not in ground_truth: - if not mute: - message = 'Irrelavant question {} will receive score 0.'.format(qid) - print(message, file=sys.stderr) - continue - common += 1 - prediction = predicted_answers[qid] - ground_truths = get_ground_truths(ground_truth[qid]) - em_for_this_question = metric_max_over_ground_truths( - exact_match_score, prediction, ground_truths) - if em_for_this_question == 0 and not mute: - print("em=0:", prediction, ground_truths) - exact_match += em_for_this_question - f1_for_this_question = metric_max_over_ground_truths( - f1_score, prediction, ground_truths) - f1 += f1_for_this_question - - exact_match = 100.0 * exact_match / len(qid_list) - f1 = 100.0 * f1 / len(qid_list) - - return {'exact_match': exact_match, 'f1': f1, 'common': common, 'denominator': len(qid_list), - 'pred_len': len(predicted_answers), 'gold_len': len(ground_truth)} - - -def get_args(): - parser = argparse.ArgumentParser( - description='Evaluation for TriviaQA {}'.format(expected_version)) - parser.add_argument('--dataset_file', help='Dataset file') - parser.add_argument('--prediction_file', help='Prediction File') - args = parser.parse_args() - return args - - -if __name__ == '__main__': - expected_version = 1.0 - args = get_args() - - dataset_json = dataset_utils.read_triviaqa_data(args.dataset_file) - if dataset_json['Version'] != expected_version: - print('Evaluation expects v-{} , but got dataset with v-{}'.format(expected_version, dataset_json['Version']), - file=sys.stderr) - key_to_ground_truth = dataset_utils.get_key_to_ground_truth(dataset_json) - predictions = file_utils.read_json(args.prediction_file) - eval_dict = evaluate_triviaqa(key_to_ground_truth, predictions) - print(eval_dict) diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/train_color.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/train_color.py deleted file mode 100644 index 3c1aeb9f33ff7ebf95489cef9a3e96e8af7ee3d7..0000000000000000000000000000000000000000 --- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/train_color.py +++ /dev/null @@ -1,191 +0,0 @@ -import sys -import os - -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))) -ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -import time -import json -import numpy as np -import cv2 -import random -import torch -import torch.nn as nn -from torch.utils.data import DataLoader -from tqdm import tqdm - -from lib.options import BaseOptions -from lib.mesh_util import * -from lib.sample_util import * -from lib.train_util import * -from lib.data import * -from lib.model import * -from lib.geometry import index - -# get options -opt = BaseOptions().parse() - -def train_color(opt): - # set cuda - 
cuda = torch.device('cuda:%d' % opt.gpu_id) - - train_dataset = TrainDataset(opt, phase='train') - test_dataset = TrainDataset(opt, phase='test') - - projection_mode = train_dataset.projection_mode - - # create data loader - train_data_loader = DataLoader(train_dataset, - batch_size=opt.batch_size, shuffle=not opt.serial_batches, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - - print('train data size: ', len(train_data_loader)) - - # NOTE: batch size should be 1 and use all the points for evaluation - test_data_loader = DataLoader(test_dataset, - batch_size=1, shuffle=False, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - print('test data size: ', len(test_data_loader)) - - # create net - netG = HGPIFuNet(opt, projection_mode).to(device=cuda) - - lr = opt.learning_rate - - # Always use resnet for color regression - netC = ResBlkPIFuNet(opt).to(device=cuda) - optimizerC = torch.optim.Adam(netC.parameters(), lr=opt.learning_rate) - - def set_train(): - netG.eval() - netC.train() - - def set_eval(): - netG.eval() - netC.eval() - - print('Using NetworkG: ', netG.name, 'networkC: ', netC.name) - - # load checkpoints - if opt.load_netG_checkpoint_path is not None: - print('loading for net G ...', opt.load_netG_checkpoint_path) - netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda)) - else: - model_path_G = '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name) - print('loading for net G ...', model_path_G) - netG.load_state_dict(torch.load(model_path_G, map_location=cuda)) - - if opt.load_netC_checkpoint_path is not None: - print('loading for net C ...', opt.load_netC_checkpoint_path) - netC.load_state_dict(torch.load(opt.load_netC_checkpoint_path, map_location=cuda)) - - if opt.continue_train: - if opt.resume_epoch < 0: - model_path_C = '%s/%s/netC_latest' % (opt.checkpoints_path, opt.name) - else: - model_path_C = '%s/%s/netC_epoch_%d' % (opt.checkpoints_path, opt.name, opt.resume_epoch) - - print('Resuming from ', model_path_C) - netC.load_state_dict(torch.load(model_path_C, map_location=cuda)) - - os.makedirs(opt.checkpoints_path, exist_ok=True) - os.makedirs(opt.results_path, exist_ok=True) - os.makedirs('%s/%s' % (opt.checkpoints_path, opt.name), exist_ok=True) - os.makedirs('%s/%s' % (opt.results_path, opt.name), exist_ok=True) - - opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt') - with open(opt_log, 'w') as outfile: - outfile.write(json.dumps(vars(opt), indent=2)) - - # training - start_epoch = 0 if not opt.continue_train else max(opt.resume_epoch,0) - for epoch in range(start_epoch, opt.num_epoch): - epoch_start_time = time.time() - - set_train() - iter_data_time = time.time() - for train_idx, train_data in enumerate(train_data_loader): - iter_start_time = time.time() - # retrieve the data - image_tensor = train_data['img'].to(device=cuda) - calib_tensor = train_data['calib'].to(device=cuda) - color_sample_tensor = train_data['color_samples'].to(device=cuda) - - image_tensor, calib_tensor = reshape_multiview_tensors(image_tensor, calib_tensor) - - if opt.num_views > 1: - color_sample_tensor = reshape_sample_tensor(color_sample_tensor, opt.num_views) - - rgb_tensor = train_data['rgbs'].to(device=cuda) - - with torch.no_grad(): - netG.filter(image_tensor) - resC, error = netC.forward(image_tensor, netG.get_im_feat(), color_sample_tensor, calib_tensor, labels=rgb_tensor) - - optimizerC.zero_grad() - error.backward() - optimizerC.step() - - iter_net_time = time.time() - eta = ((iter_net_time - epoch_start_time) / (train_idx 
+ 1)) * len(train_data_loader) - ( - iter_net_time - epoch_start_time) - - if train_idx % opt.freq_plot == 0: - print( - 'Name: {0} | Epoch: {1} | {2}/{3} | Err: {4:.06f} | LR: {5:.06f} | dataT: {6:.05f} | netT: {7:.05f} | ETA: {8:02d}:{9:02d}'.format( - opt.name, epoch, train_idx, len(train_data_loader), - error.item(), - lr, - iter_start_time - iter_data_time, - iter_net_time - iter_start_time, int(eta // 60), - int(eta - 60 * (eta // 60)))) - - if train_idx % opt.freq_save == 0 and train_idx != 0: - torch.save(netC.state_dict(), '%s/%s/netC_latest' % (opt.checkpoints_path, opt.name)) - torch.save(netC.state_dict(), '%s/%s/netC_epoch_%d' % (opt.checkpoints_path, opt.name, epoch)) - - if train_idx % opt.freq_save_ply == 0: - save_path = '%s/%s/pred_col.ply' % (opt.results_path, opt.name) - rgb = resC[0].transpose(0, 1).cpu() * 0.5 + 0.5 - points = color_sample_tensor[0].transpose(0, 1).cpu() - save_samples_rgb(save_path, points.detach().numpy(), rgb.detach().numpy()) - - iter_data_time = time.time() - - #### test - with torch.no_grad(): - set_eval() - - if not opt.no_num_eval: - test_losses = {} - print('calc error (test) ...') - test_color_error = calc_error_color(opt, netG, netC, cuda, test_dataset, 100) - print('eval test | color error:', test_color_error) - test_losses['test_color'] = test_color_error - - print('calc error (train) ...') - train_dataset.is_train = False - train_color_error = calc_error_color(opt, netG, netC, cuda, train_dataset, 100) - train_dataset.is_train = True - print('eval train | color error:', train_color_error) - test_losses['train_color'] = train_color_error - - if not opt.no_gen_mesh: - print('generate mesh (test) ...') - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - test_data = random.choice(test_dataset) - save_path = '%s/%s/test_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, test_data['name']) - gen_mesh_color(opt, netG, netC, cuda, test_data, save_path) - - print('generate mesh (train) ...') - train_dataset.is_train = False - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - train_data = random.choice(train_dataset) - save_path = '%s/%s/train_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, train_data['name']) - gen_mesh_color(opt, netG, netC, cuda, train_data, save_path) - train_dataset.is_train = True - -if __name__ == '__main__': - train_color(opt) \ No newline at end of file diff --git a/spaces/gulabpatel/Question-Answering_roberta/README.md b/spaces/gulabpatel/Question-Answering_roberta/README.md deleted file mode 100644 index 8fff2fdbc4f8cfa0982ebbba1fbd263e05e2de8e..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Question-Answering_roberta/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Question Answering_roberta -emoji: 👀 -colorFrom: green -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). 
-Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.h b/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.h deleted file mode 100644 index 60b81c6058d54638a6d74a13046fa388442d767d..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. - -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/gyugnsu/DragGan-Inversion/visualizer_drag.py b/spaces/gyugnsu/DragGan-Inversion/visualizer_drag.py deleted file mode 100644 index 033cf03a57c17f95107d204ad5a8c817d9a8614f..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/visualizer_drag.py +++ /dev/null @@ -1,429 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import click -import os - -import multiprocessing -import numpy as np -import torch -import imgui -import dnnlib -from gui_utils import imgui_window -from gui_utils import imgui_utils -from gui_utils import gl_utils -from gui_utils import text_utils -from viz import renderer -from viz import pickle_widget -from viz import latent_widget -from viz import drag_widget -from viz import capture_widget - -# ---------------------------------------------------------------------------- - - -class Visualizer(imgui_window.ImguiWindow): - def __init__(self, capture_dir=None): - super().__init__(title='DragGAN', window_width=3840, window_height=2160) - - # Internals. - self._last_error_print = None - self._async_renderer = AsyncRenderer() - self._defer_rendering = 0 - self._tex_img = None - self._tex_obj = None - self._mask_obj = None - self._image_area = None - self._status = dnnlib.EasyDict() - - # Widget interface. - self.args = dnnlib.EasyDict() - self.result = dnnlib.EasyDict() - self.pane_w = 0 - self.label_w = 0 - self.button_w = 0 - self.image_w = 0 - self.image_h = 0 - - # Widgets. 
- self.pickle_widget = pickle_widget.PickleWidget(self) - self.latent_widget = latent_widget.LatentWidget(self) - self.drag_widget = drag_widget.DragWidget(self) - self.capture_widget = capture_widget.CaptureWidget(self) - - if capture_dir is not None: - self.capture_widget.path = capture_dir - - # Initialize window. - self.set_position(0, 0) - self._adjust_font_size() - self.skip_frame() # Layout may change after first frame. - - def close(self): - super().close() - if self._async_renderer is not None: - self._async_renderer.close() - self._async_renderer = None - - def add_recent_pickle(self, pkl, ignore_errors=False): - self.pickle_widget.add_recent(pkl, ignore_errors=ignore_errors) - - def load_pickle(self, pkl, ignore_errors=False): - self.pickle_widget.load(pkl, ignore_errors=ignore_errors) - - def print_error(self, error): - error = str(error) - if error != self._last_error_print: - print('\n' + error + '\n') - self._last_error_print = error - - def defer_rendering(self, num_frames=1): - self._defer_rendering = max(self._defer_rendering, num_frames) - - def clear_result(self): - self._async_renderer.clear_result() - - def set_async(self, is_async): - if is_async != self._async_renderer.is_async: - self._async_renderer.set_async(is_async) - self.clear_result() - if 'image' in self.result: - self.result.message = 'Switching rendering process...' - self.defer_rendering() - - def _adjust_font_size(self): - old = self.font_size - self.set_font_size( - min(self.content_width / 120, self.content_height / 60)) - if self.font_size != old: - self.skip_frame() # Layout changed. - - def check_update_mask(self, **args): - update_mask = False - if 'pkl' in self._status: - if self._status.pkl != args['pkl']: - update_mask = True - self._status.pkl = args['pkl'] - if 'w0_seed' in self._status: - if self._status.w0_seed != args['w0_seed']: - update_mask = True - self._status.w0_seed = args['w0_seed'] - return update_mask - - def capture_image_frame(self): - self.capture_next_frame() - captured_frame = self.pop_captured_frame() - captured_image = None - if captured_frame is not None: - x1, y1, w, h = self._image_area - captured_image = captured_frame[y1:y1+h, x1:x1+w, :] - return captured_image - - def get_drag_info(self): - seed = self.latent_widget.seed - points = self.drag_widget.points - targets = self.drag_widget.targets - mask = self.drag_widget.mask - w = self._async_renderer._renderer_obj.w - return seed, points, targets, mask, w - - def draw_frame(self): - self.begin_frame() - self.args = dnnlib.EasyDict() - self.pane_w = self.font_size * 18 - self.button_w = self.font_size * 5 - self.label_w = round(self.font_size * 4.5) - - # Detect mouse dragging in the result area. - if self._image_area is not None: - if not hasattr(self.drag_widget, 'width'): - self.drag_widget.init_mask(self.image_w, self.image_h) - clicked, down, img_x, img_y = imgui_utils.click_hidden_window( - '##image_area', self._image_area[0], self._image_area[1], self._image_area[2], self._image_area[3], self.image_w, self.image_h) - self.drag_widget.action(clicked, down, img_x, img_y) - - # Begin control pane. - imgui.set_next_window_position(0, 0) - imgui.set_next_window_size(self.pane_w, self.content_height) - imgui.begin('##control_pane', closable=False, flags=( - imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE)) - - # Widgets. 
- expanded, _visible = imgui_utils.collapsing_header( - 'Network & latent', default=True) - self.pickle_widget(expanded) - self.latent_widget(expanded) - expanded, _visible = imgui_utils.collapsing_header( - 'Drag', default=True) - self.drag_widget(expanded) - expanded, _visible = imgui_utils.collapsing_header( - 'Capture', default=True) - self.capture_widget(expanded) - - # Render. - if self.is_skipping_frames(): - pass - elif self._defer_rendering > 0: - self._defer_rendering -= 1 - elif self.args.pkl is not None: - self._async_renderer.set_args(**self.args) - result = self._async_renderer.get_result() - if result is not None: - self.result = result - if 'stop' in self.result and self.result.stop: - self.drag_widget.stop_drag() - if 'points' in self.result: - self.drag_widget.set_points(self.result.points) - if 'init_net' in self.result: - if self.result.init_net: - self.drag_widget.reset_point() - - if self.check_update_mask(**self.args): - h, w, _ = self.result.image.shape - self.drag_widget.init_mask(w, h) - - # Display. - max_w = self.content_width - self.pane_w - max_h = self.content_height - pos = np.array([self.pane_w + max_w / 2, max_h / 2]) - if 'image' in self.result: - if self._tex_img is not self.result.image: - self._tex_img = self.result.image - if self._tex_obj is None or not self._tex_obj.is_compatible(image=self._tex_img): - self._tex_obj = gl_utils.Texture( - image=self._tex_img, bilinear=False, mipmap=False) - else: - self._tex_obj.update(self._tex_img) - self.image_h, self.image_w = self._tex_obj.height, self._tex_obj.width - zoom = min(max_w / self._tex_obj.width, - max_h / self._tex_obj.height) - zoom = np.floor(zoom) if zoom >= 1 else zoom - self._tex_obj.draw(pos=pos, zoom=zoom, align=0.5, rint=True) - if self.drag_widget.show_mask and hasattr(self.drag_widget, 'mask'): - mask = ((1-self.drag_widget.mask.unsqueeze(-1)) - * 255).to(torch.uint8) - if self._mask_obj is None or not self._mask_obj.is_compatible(image=self._tex_img): - self._mask_obj = gl_utils.Texture( - image=mask, bilinear=False, mipmap=False) - else: - self._mask_obj.update(mask) - self._mask_obj.draw(pos=pos, zoom=zoom, - align=0.5, rint=True, alpha=0.15) - - if self.drag_widget.mode in ['flexible', 'fixed']: - posx, posy = imgui.get_mouse_pos() - if posx >= self.pane_w: - pos_c = np.array([posx, posy]) - gl_utils.draw_circle( - center=pos_c, radius=self.drag_widget.r_mask * zoom, alpha=0.5) - - rescale = self._tex_obj.width / 512 * zoom - - for point in self.drag_widget.targets: - pos_x = self.pane_w + max_w / 2 + \ - (point[1] - self.image_w//2) * zoom - pos_y = max_h / 2 + (point[0] - self.image_h//2) * zoom - gl_utils.draw_circle(center=np.array([pos_x, pos_y]), color=[ - 0, 0, 1], radius=9 * rescale) - - for point in self.drag_widget.points: - pos_x = self.pane_w + max_w / 2 + \ - (point[1] - self.image_w//2) * zoom - pos_y = max_h / 2 + (point[0] - self.image_h//2) * zoom - gl_utils.draw_circle(center=np.array([pos_x, pos_y]), color=[ - 1, 0, 0], radius=9 * rescale) - - for point, target in zip(self.drag_widget.points, self.drag_widget.targets): - t_x = self.pane_w + max_w / 2 + \ - (target[1] - self.image_w//2) * zoom - t_y = max_h / 2 + (target[0] - self.image_h//2) * zoom - - p_x = self.pane_w + max_w / 2 + \ - (point[1] - self.image_w//2) * zoom - p_y = max_h / 2 + (point[0] - self.image_h//2) * zoom - - gl_utils.draw_arrow(p_x, p_y, t_x, t_y, - l=8 * rescale, width=3 * rescale) - - imshow_w = int(self._tex_obj.width * zoom) - imshow_h = int(self._tex_obj.height * zoom) - self._image_area 
= [int(self.pane_w + max_w / 2 - imshow_w / 2), - int(max_h / 2 - imshow_h / 2), imshow_w, imshow_h] - if 'error' in self.result: - self.print_error(self.result.error) - if 'message' not in self.result: - self.result.message = str(self.result.error) - if 'message' in self.result: - tex = text_utils.get_texture( - self.result.message, size=self.font_size, max_width=max_w, max_height=max_h, outline=2) - tex.draw(pos=pos, align=0.5, rint=True, color=1) - - # End frame. - self._adjust_font_size() - imgui.end() - self.end_frame() - -# ---------------------------------------------------------------------------- - - -class AsyncRenderer: - def __init__(self): - self._closed = False - self._is_async = False - self._cur_args = None - self._cur_result = None - self._cur_stamp = 0 - self._renderer_obj = None - self._args_queue = None - self._result_queue = None - self._process = None - - def close(self): - self._closed = True - self._renderer_obj = None - if self._process is not None: - self._process.terminate() - self._process = None - self._args_queue = None - self._result_queue = None - - @property - def is_async(self): - return self._is_async - - def set_async(self, is_async): - self._is_async = is_async - - def set_args(self, **args): - assert not self._closed - args2 = args.copy() - args_mask = args2.pop('mask') - if self._cur_args: - _cur_args = self._cur_args.copy() - cur_args_mask = _cur_args.pop('mask') - else: - _cur_args = self._cur_args - # if args != self._cur_args: - if args2 != _cur_args: - if self._is_async: - self._set_args_async(**args) - else: - self._set_args_sync(**args) - self._cur_args = args - - def _set_args_async(self, **args): - if self._process is None: - self._args_queue = multiprocessing.Queue() - self._result_queue = multiprocessing.Queue() - try: - multiprocessing.set_start_method('spawn') - except RuntimeError: - pass - self._process = multiprocessing.Process(target=self._process_fn, args=( - self._args_queue, self._result_queue), daemon=True) - self._process.start() - self._args_queue.put([args, self._cur_stamp]) - - def _set_args_sync(self, **args): - if self._renderer_obj is None: - self._renderer_obj = renderer.Renderer() - self._cur_result = self._renderer_obj.render(**args) - - def get_result(self): - assert not self._closed - if self._result_queue is not None: - while self._result_queue.qsize() > 0: - result, stamp = self._result_queue.get() - if stamp == self._cur_stamp: - self._cur_result = result - return self._cur_result - - def clear_result(self): - assert not self._closed - self._cur_args = None - self._cur_result = None - self._cur_stamp += 1 - - @staticmethod - def _process_fn(args_queue, result_queue): - renderer_obj = renderer.Renderer() - cur_args = None - cur_stamp = None - while True: - args, stamp = args_queue.get() - while args_queue.qsize() > 0: - args, stamp = args_queue.get() - if args != cur_args or stamp != cur_stamp: - result = renderer_obj.render(**args) - if 'error' in result: - result.error = renderer.CapturedException(result.error) - result_queue.put([result, stamp]) - cur_args = args - cur_stamp = stamp - -# ---------------------------------------------------------------------------- - - -@click.command() -@click.argument('pkls', metavar='PATH', nargs=-1) -@click.option('--capture-dir', help='Where to save screenshot captures', metavar='PATH', default=None) -@click.option('--browse-dir', help='Specify model path for the \'Browse...\' button', metavar='PATH') -def main( - pkls, - capture_dir, - browse_dir -): - """Interactive model 
visualizer. - - Optional PATH argument can be used to specify which .pkl file to load. - """ - viz = Visualizer(capture_dir=capture_dir) - - if browse_dir is not None: - viz.pickle_widget.search_dirs = [browse_dir] - - # List pickles. - if len(pkls) > 0: - for pkl in pkls: - viz.add_recent_pickle(pkl) - viz.load_pickle(pkls[0]) - else: - pretrained = [ - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-afhqcat-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-afhqdog-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-afhqv2-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-afhqwild-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-brecahad-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-celebahq-256x256.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-cifar10-32x32.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-1024x1024.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-256x256.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhq-512x512.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhqu-1024x1024.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-ffhqu-256x256.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-lsundog-256x256.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-metfaces-1024x1024.pkl', - 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/stylegan2-metfacesu-1024x1024.pkl' - ] - - # Populate recent pickles list with pretrained model URLs. - for url in pretrained: - viz.add_recent_pickle(url) - - # Run. - while not viz.should_close(): - viz.draw_frame() - viz.close() - -# ---------------------------------------------------------------------------- - - -if __name__ == "__main__": - main() - -# ---------------------------------------------------------------------------- diff --git a/spaces/hank1996/yolopv2/utils/aws/userdata.sh b/spaces/hank1996/yolopv2/utils/aws/userdata.sh deleted file mode 100644 index 06b606ba68e46f1c2e44930d25e509e9ded49ce8..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/utils/aws/userdata.sh +++ /dev/null @@ -1,29 +0,0 @@ - -#!/bin/bash -# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html -# This script will run only once on first instance start (for a re-start script see mime.sh) -# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir -# Use >300 GB SSD - -cd home/ubuntu -if [ ! -d yolor ]; then - echo "Running first-time script." # install dependencies, download COCO, pull Docker - git clone -b paper https://github.com/WongKinYiu/yolor && sudo chmod -R 777 yolor - cd yolor - bash data/scripts/get_coco.sh && echo "Data done." & - sudo docker pull nvcr.io/nvidia/pytorch:21.08-py3 && echo "Docker done."
& - python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." & - wait && echo "All tasks done." # finish background tasks -else - echo "Running re-start script." # resume interrupted runs - i=0 - list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour' - while IFS= read -r id; do - ((i++)) - echo "restarting container $i: $id" - sudo docker start $id - # sudo docker exec -it $id python train.py --resume # single-GPU - sudo docker exec -d $id python utils/aws/resume.py # multi-scenario - done <<<"$list" -fi - diff --git a/spaces/haoqi7/research/lrt/lrt.py b/spaces/haoqi7/research/lrt/lrt.py deleted file mode 100644 index 85d64fec546d8948d766befb9bf5abb7cabccd00..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/lrt/lrt.py +++ /dev/null @@ -1,144 +0,0 @@ -from .clustering import * -from typing import List -import textdistance as td -from .utils import UnionFind, ArticleList -from .academic_query import AcademicQuery -import streamlit as st -from tokenizers import Tokenizer -from .clustering.clusters import KeyphraseCount - - - -class LiteratureResearchTool: - def __init__(self, cluster_config: Configuration = None): - self.literature_search = AcademicQuery - self.cluster_pipeline = ClusterPipeline(cluster_config) - - - def __postprocess_clusters__(self, clusters: ClusterList,query: str) ->ClusterList: - ''' - add top-5 keyphrases to each cluster - :param clusters: - :return: clusters - ''' - def condition(x: KeyphraseCount, y: KeyphraseCount): - return td.ratcliff_obershelp(x.keyphrase, y.keyphrase) > 0.8 - - def valid_keyphrase(x:KeyphraseCount): - tmp = x.keyphrase - return tmp is not None and tmp != '' and not tmp.isspace() and len(tmp)!=1\ - and tmp != query - - - for cluster in clusters: - - keyphrases = cluster.get_keyphrases() # [kc] - keyphrases = list(filter(valid_keyphrase,keyphrases)) - unionfind = UnionFind(keyphrases, condition) - unionfind.union_step() - - tmp = unionfind.get_unions() # dict(root_id = [kc]) - tmp = tmp.values() # [[kc]] - # [[kc]] -> [ new kc] -> sorted - tmp = [KeyphraseCount.reduce(x) for x in tmp] - keyphrases = sorted(tmp,key= lambda x: x.count,reverse=True)[:5] - keyphrases = [x.keyphrase for x in keyphrases] - - # keyphrases = sorted(list(unionfind.get_unions().values()), key=len, reverse=True)[:5] # top-5 keyphrases: list - # for i in keyphrases: - # tmp = '/'.join(i) - # cluster.top_5_keyphrases.append(tmp) - cluster.top_5_keyphrases = keyphrases - - return clusters - - def __call__(self, - query: str, - num_papers: int, - start_year: int, - end_year: int, - max_k: int, - platforms: List[str] = ['IEEE', 'Arxiv', 'Paper with Code'], - loading_ctx_manager = None, - standardization = False - ): - - - for platform in platforms: - if loading_ctx_manager: - with loading_ctx_manager(): - clusters, articles = self.__platformPipeline__(platform,query,num_papers,start_year,end_year,max_k,standardization) - else: - clusters, articles = self.__platformPipeline__(platform, query, num_papers, start_year, end_year,max_k,standardization) - - clusters.sort() - yield clusters,articles - - - - def __platformPipeline__(self,platforn_name:str, - query: str, - num_papers: int, - start_year: int, - end_year: int, - max_k: int, - standardization - ) -> (ClusterList,ArticleList): - - @st.cache(hash_funcs={Tokenizer: Tokenizer.__hash__},allow_output_mutation=True) - def ieee_process( - query: str, - num_papers: int, - start_year: int, - end_year: int, - ): - articles = 
ArticleList.parse_ieee_articles( - self.literature_search.ieee(query, start_year, end_year, num_papers)) # ArticleList - abstracts = articles.getAbstracts() # List[str] - clusters = self.cluster_pipeline(abstracts,max_k,standardization) - clusters = self.__postprocess_clusters__(clusters,query) - return clusters, articles - - @st.cache(hash_funcs={Tokenizer: Tokenizer.__hash__},allow_output_mutation=True) - def arxiv_process( - query: str, - num_papers: int, - ): - articles = ArticleList.parse_arxiv_articles( - self.literature_search.arxiv(query, num_papers)) # ArticleList - abstracts = articles.getAbstracts() # List[str] - clusters = self.cluster_pipeline(abstracts,max_k,standardization) - clusters = self.__postprocess_clusters__(clusters,query) - return clusters, articles - - @st.cache(hash_funcs={Tokenizer: Tokenizer.__hash__},allow_output_mutation=True) - def pwc_process( - query: str, - num_papers: int, - ): - articles = ArticleList.parse_pwc_articles( - self.literature_search.paper_with_code(query, num_papers)) # ArticleList - abstracts = articles.getAbstracts() # List[str] - clusters = self.cluster_pipeline(abstracts,max_k,standardization) - clusters = self.__postprocess_clusters__(clusters,query) - return clusters, articles - - if platforn_name == 'IEEE': - return ieee_process(query,num_papers,start_year,end_year) - elif platforn_name == 'Arxiv': - return arxiv_process(query,num_papers) - elif platforn_name == 'Paper with Code': - return pwc_process(query,num_papers) - else: - raise RuntimeError('This platform is not supported. Please open an issue on the GitHub.') - - - - - - - - - - - diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py deleted file mode 100644 index c8ef1cbc21268516c8c6a94a0bf6c8f997b27ed0..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -""" -Implements the Generalized R-CNN framework -""" - -import torch -from torch import nn - -from maskrcnn_benchmark.structures.image_list import to_image_list - -from ..backbone import build_backbone -from ..rpn import build_rpn -from ..roi_heads import build_roi_heads - -import timeit - -class GeneralizedRCNN(nn.Module): - """ - Main class for Generalized R-CNN. Currently supports boxes and masks. - It consists of three main parts: - - backbone - - rpn - - heads: takes the features + the proposals from the RPN and computes - detections / masks from it. - """ - - def __init__(self, cfg): - super(GeneralizedRCNN, self).__init__() - - self.backbone = build_backbone(cfg) - self.rpn = build_rpn(cfg) - self.roi_heads = build_roi_heads(cfg) - self.DEBUG = cfg.MODEL.DEBUG - self.ONNX = cfg.MODEL.ONNX - self.freeze_backbone = cfg.MODEL.BACKBONE.FREEZE - self.freeze_fpn = cfg.MODEL.FPN.FREEZE - self.freeze_rpn = cfg.MODEL.RPN.FREEZE - - if cfg.MODEL.LINEAR_PROB: - assert cfg.MODEL.BACKBONE.FREEZE, "For linear probing, backbone should be frozen!" - if hasattr(self.backbone, 'fpn'): - assert cfg.MODEL.FPN.FREEZE, "For linear probing, FPN should be frozen!" 
- self.linear_prob = cfg.MODEL.LINEAR_PROB - - def train(self, mode=True): - """Convert the model into training mode while keep layers freezed.""" - super(GeneralizedRCNN, self).train(mode) - if self.freeze_backbone: - self.backbone.body.eval() - for p in self.backbone.body.parameters(): - p.requires_grad = False - if self.freeze_fpn: - self.backbone.fpn.eval() - for p in self.backbone.fpn.parameters(): - p.requires_grad = False - if self.freeze_rpn: - self.rpn.eval() - for p in self.rpn.parameters(): - p.requires_grad = False - if self.linear_prob: - if self.rpn is not None: - for key, value in self.rpn.named_parameters(): - if not ('bbox_pred' in key or 'cls_logits' in key or 'centerness' in key or 'cosine_scale' in key): - value.requires_grad = False - if self.roi_heads is not None: - for key, value in self.roi_heads.named_parameters(): - if not ('bbox_pred' in key or 'cls_logits' in key or 'centerness' in key or 'cosine_scale' in key): - value.requires_grad = False - - def forward(self, images, targets=None): - """ - Arguments: - images (list[Tensor] or ImageList): images to be processed - targets (list[BoxList]): ground-truth boxes present in the image (optional) - - Returns: - result (list[BoxList] or dict[Tensor]): the output from the model. - During training, it returns a dict[Tensor] which contains the losses. - During testing, it returns list[BoxList] contains additional fields - like `scores`, `labels` and `mask` (for Mask R-CNN models). - - """ - if self.training and targets is None: - raise ValueError("In training mode, targets should be passed") - - if self.DEBUG: debug_info = {} - if self.DEBUG: debug_info['input_size'] = images[0].size() - if self.DEBUG: tic = timeit.time.perf_counter() - - if self.ONNX: - features = self.backbone(images) - else: - images = to_image_list(images) - features = self.backbone(images.tensors) - - if self.DEBUG: debug_info['feat_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['feat_size'] = [feat.size() for feat in features] - if self.DEBUG: tic = timeit.time.perf_counter() - - proposals, proposal_losses = self.rpn(images, features, targets) - - if self.DEBUG: debug_info['rpn_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['#rpn'] = [prop for prop in proposals] - if self.DEBUG: tic = timeit.time.perf_counter() - - if self.roi_heads: - x, result, detector_losses = self.roi_heads(features, proposals, targets) - else: - # RPN-only models don't have roi_heads - x = features - result = proposals - detector_losses = {} - - if self.DEBUG: debug_info['rcnn_time'] = timeit.time.perf_counter() - tic - if self.DEBUG: debug_info['#rcnn'] = result - if self.DEBUG: return result, debug_info - - if self.training: - losses = {} - losses.update(detector_losses) - losses.update(proposal_losses) - return losses - - return result \ No newline at end of file diff --git a/spaces/heiyubili/bingo/src/lib/isomorphic/browser.ts b/spaces/heiyubili/bingo/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/hilloworld/chatgpt/Dockerfile b/spaces/hilloworld/chatgpt/Dockerfile deleted file mode 
100644 index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000 --- a/spaces/hilloworld/chatgpt/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# Use golang:alpine as the base image for the build stage -FROM golang:alpine AS builder - -# Add git so the project can be cloned from GitHub later -RUN apk --no-cache add git - -# Clone the go-proxy-bingai project from GitHub into /workspace/app -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# Set the working directory to the project directory cloned above -WORKDIR /workspace/app - -# Build the Go project. -ldflags="-s -w" shrinks the compiled binary -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# Use the lightweight alpine image as the runtime base image -FROM alpine - -# Set the working directory -WORKDIR /workspace/app - -# Copy the compiled binary from the build stage into the runtime image -COPY --from=builder /workspace/app/go-proxy-bingai . - -# Set an environment variable; the value here is a random string -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# Expose port 8080 -EXPOSE 8080 - -# Command to run when the container starts -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/huak95/personaGPT_custom/frontend/styles/bootstrap.css b/spaces/huak95/personaGPT_custom/frontend/styles/bootstrap.css deleted file mode 100644 index 5211c9c377e6ddcbcd12de512cff00c8c047fd5a..0000000000000000000000000000000000000000 --- a/spaces/huak95/personaGPT_custom/frontend/styles/bootstrap.css +++ /dev/null @@ -1,7 +0,0 @@ -/*! - * Bootstrap v4.5.0 (https://getbootstrap.com/) - * Copyright 2011-2020 The Bootstrap Authors - * Copyright 2011-2020 Twitter, Inc. - * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) - */:root{--blue:#7a3016;--indigo:#6610f2;--purple:#6f42c1;--pink:#e83e8c;--red:#dc3545;--orange:#fd7e14;--yellow:#ffc107;--green:#28a745;--teal:#20c997;--cyan:#17a2b8;--white:#fff;--gray:#6c757d;--gray-dark:#343a40;--primary:#7a3016;--secondary:#6c757d;--success:#28a745;--info:#17a2b8;--warning:#ffc107;--danger:#dc3545;--light:#f8f9fa;--dark:#343a40;--breakpoint-xs:0;--breakpoint-sm:576px;--breakpoint-md:768px;--breakpoint-lg:992px;--breakpoint-xl:1200px;--font-family-sans-serif:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--font-family-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace}*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus:not(:focus-visible){outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0
1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#7a3016;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]){color:inherit;text-decoration:none}a:not([href]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto;-ms-overflow-style:scrollbar}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto -webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:2.5rem}.h2,h2{font-size:2rem}.h3,h3{font-size:1.75rem}.h4,h4{font-size:1.5rem}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:6rem;font-weight:300;line-height:1.2}.display-2{font-size:5.5rem;font-weight:300;line-height:1.2}.display-3{font-size:4.5rem;font-weight:300;line-height:1.2}.display-4{font-size:3.5rem;font-weight:300;line-height:1.2}hr{margin-top:1rem;margin-bottom:1rem;border:0;border-top:1px solid rgba(0,0,0,.1)}.small,small{font-size:80%;font-weight:400}.mark,mark{padding:.2em;background-color:#fcf8e3}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:90%;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote-footer{display:block;font-size:80%;color:#6c757d}.blockquote-footer::before{content:"\2014\00A0"}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid 
#dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:90%;color:#6c757d}code{font-size:87.5%;color:#e83e8c;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:87.5%;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:100%;font-weight:700}pre{display:block;font-size:87.5%;color:#212529}pre code{font-size:inherit;color:inherit;word-break:normal}.pre-scrollable{max-height:340px;overflow-y:scroll}.container{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container{max-width:540px}}@media (min-width:768px){.container{max-width:720px}}@media (min-width:992px){.container{max-width:960px}}@media (min-width:1200px){.container{max-width:1140px}}.container-fluid,.container-lg,.container-md,.container-sm,.container-xl{width:100%;padding-right:15px;padding-left:15px;margin-right:auto;margin-left:auto}@media (min-width:576px){.container,.container-sm{max-width:540px}}@media (min-width:768px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:992px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1140px}}.row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-15px;margin-left:-15px}.no-gutters{margin-right:0;margin-left:0}.no-gutters>.col,.no-gutters>[class*=col-]{padding-right:0;padding-left:0}.col,.col-1,.col-10,.col-11,.col-12,.col-2,.col-3,.col-4,.col-5,.col-6,.col-7,.col-8,.col-9,.col-auto,.col-lg,.col-lg-1,.col-lg-10,.col-lg-11,.col-lg-12,.col-lg-2,.col-lg-3,.col-lg-4,.col-lg-5,.col-lg-6,.col-lg-7,.col-lg-8,.col-lg-9,.col-lg-auto,.col-md,.col-md-1,.col-md-10,.col-md-11,.col-md-12,.col-md-2,.col-md-3,.col-md-4,.col-md-5,.col-md-6,.col-md-7,.col-md-8,.col-md-9,.col-md-auto,.col-sm,.col-sm-1,.col-sm-10,.col-sm-11,.col-sm-12,.col-sm-2,.col-sm-3,.col-sm-4,.col-sm-5,.col-sm-6,.col-sm-7,.col-sm-8,.col-sm-9,.col-sm-auto,.col-xl,.col-xl-1,.col-xl-10,.col-xl-11,.col-xl-12,.col-xl-2,.col-xl-3,.col-xl-4,.col-xl-5,.col-xl-6,.col-xl-7,.col-xl-8,.col-xl-9,.col-xl-auto{position:relative;width:100%;padding-right:15px;padding-left:15px}.col{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-10{-ms-flex:0 0 83.333333%;flex:0 0 
83.333333%;max-width:83.333333%}.col-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-first{-ms-flex-order:-1;order:-1}.order-last{-ms-flex-order:13;order:13}.order-0{-ms-flex-order:0;order:0}.order-1{-ms-flex-order:1;order:1}.order-2{-ms-flex-order:2;order:2}.order-3{-ms-flex-order:3;order:3}.order-4{-ms-flex-order:4;order:4}.order-5{-ms-flex-order:5;order:5}.order-6{-ms-flex-order:6;order:6}.order-7{-ms-flex-order:7;order:7}.order-8{-ms-flex-order:8;order:8}.order-9{-ms-flex-order:9;order:9}.order-10{-ms-flex-order:10;order:10}.order-11{-ms-flex-order:11;order:11}.order-12{-ms-flex-order:12;order:12}.offset-1{margin-left:8.333333%}.offset-2{margin-left:16.666667%}.offset-3{margin-left:25%}.offset-4{margin-left:33.333333%}.offset-5{margin-left:41.666667%}.offset-6{margin-left:50%}.offset-7{margin-left:58.333333%}.offset-8{margin-left:66.666667%}.offset-9{margin-left:75%}.offset-10{margin-left:83.333333%}.offset-11{margin-left:91.666667%}@media (min-width:576px){.col-sm{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-sm-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-sm-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-sm-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-sm-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-sm-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-sm-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-sm-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-sm-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-sm-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-sm-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-sm-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-sm-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-sm-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-sm-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-sm-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-sm-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-sm-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-sm-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-sm-first{-ms-flex-order:-1;order:-1}.order-sm-last{-ms-flex-order:13;order:13}.order-sm-0{-ms-flex-order:0;order:0}.order-sm-1{-ms-flex-order:1;order:1}.order-sm-2{-ms-flex-order:2;order:2}.order-sm-3{-ms-flex-order:3;order:3}.order-sm-4{-ms-flex-order:4;order:4}.order-sm-5{-ms-flex-order:5;order:5}.order-sm-6{-ms-flex-order:6;order:6}.order-sm-7{-ms-flex-order:7;order:7}.order-sm-8{-ms-flex-order:8;order:8}.order-sm-9{-ms-flex-order:9;order:9}.order-sm-10{-ms-flex-order:10;order:10}.order-sm-11{-ms-flex-order:11;order:11}.order-sm-12{-ms-flex-order:12;order:12}.offset-sm-0{margin-left:0}.offset-sm-1{margin-left:8.333333%}.offset-sm-2{margin-left:16.666667%}.offset-sm-3{margin-left:25%}.offset-sm-4{margin-left:33.333333%}.offset-sm-5{margin-left:41.666667%}.offset-sm-6{margin-left:50%}.offset-sm-7{margin-left:58.333333%}.offset-sm-8{margin-left:66.666667%}.offset-sm-9{margin-left:75%}.offset-sm-10{margin-left:83.333333%}.offset-sm-11{margin-left:91.666667%}}@media 
(min-width:768px){.col-md{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-md-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-md-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-md-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-md-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-md-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-md-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-md-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-md-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-md-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-md-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-md-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-md-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-md-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-md-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-md-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-md-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-md-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-md-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-md-first{-ms-flex-order:-1;order:-1}.order-md-last{-ms-flex-order:13;order:13}.order-md-0{-ms-flex-order:0;order:0}.order-md-1{-ms-flex-order:1;order:1}.order-md-2{-ms-flex-order:2;order:2}.order-md-3{-ms-flex-order:3;order:3}.order-md-4{-ms-flex-order:4;order:4}.order-md-5{-ms-flex-order:5;order:5}.order-md-6{-ms-flex-order:6;order:6}.order-md-7{-ms-flex-order:7;order:7}.order-md-8{-ms-flex-order:8;order:8}.order-md-9{-ms-flex-order:9;order:9}.order-md-10{-ms-flex-order:10;order:10}.order-md-11{-ms-flex-order:11;order:11}.order-md-12{-ms-flex-order:12;order:12}.offset-md-0{margin-left:0}.offset-md-1{margin-left:8.333333%}.offset-md-2{margin-left:16.666667%}.offset-md-3{margin-left:25%}.offset-md-4{margin-left:33.333333%}.offset-md-5{margin-left:41.666667%}.offset-md-6{margin-left:50%}.offset-md-7{margin-left:58.333333%}.offset-md-8{margin-left:66.666667%}.offset-md-9{margin-left:75%}.offset-md-10{margin-left:83.333333%}.offset-md-11{margin-left:91.666667%}}@media (min-width:992px){.col-lg{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-lg-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-lg-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-lg-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-lg-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-lg-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-lg-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-lg-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-lg-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-lg-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-lg-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-lg-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-lg-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-lg-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-lg-8{-ms-flex:0 0 66.666667%;flex:0 0 
66.666667%;max-width:66.666667%}.col-lg-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-lg-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-lg-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-lg-12{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.order-lg-first{-ms-flex-order:-1;order:-1}.order-lg-last{-ms-flex-order:13;order:13}.order-lg-0{-ms-flex-order:0;order:0}.order-lg-1{-ms-flex-order:1;order:1}.order-lg-2{-ms-flex-order:2;order:2}.order-lg-3{-ms-flex-order:3;order:3}.order-lg-4{-ms-flex-order:4;order:4}.order-lg-5{-ms-flex-order:5;order:5}.order-lg-6{-ms-flex-order:6;order:6}.order-lg-7{-ms-flex-order:7;order:7}.order-lg-8{-ms-flex-order:8;order:8}.order-lg-9{-ms-flex-order:9;order:9}.order-lg-10{-ms-flex-order:10;order:10}.order-lg-11{-ms-flex-order:11;order:11}.order-lg-12{-ms-flex-order:12;order:12}.offset-lg-0{margin-left:0}.offset-lg-1{margin-left:8.333333%}.offset-lg-2{margin-left:16.666667%}.offset-lg-3{margin-left:25%}.offset-lg-4{margin-left:33.333333%}.offset-lg-5{margin-left:41.666667%}.offset-lg-6{margin-left:50%}.offset-lg-7{margin-left:58.333333%}.offset-lg-8{margin-left:66.666667%}.offset-lg-9{margin-left:75%}.offset-lg-10{margin-left:83.333333%}.offset-lg-11{margin-left:91.666667%}}@media (min-width:1200px){.col-xl{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;min-width:0;max-width:100%}.row-cols-xl-1>*{-ms-flex:0 0 100%;flex:0 0 100%;max-width:100%}.row-cols-xl-2>*{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.row-cols-xl-3>*{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.row-cols-xl-4>*{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.row-cols-xl-5>*{-ms-flex:0 0 20%;flex:0 0 20%;max-width:20%}.row-cols-xl-6>*{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-auto{-ms-flex:0 0 auto;flex:0 0 auto;width:auto;max-width:100%}.col-xl-1{-ms-flex:0 0 8.333333%;flex:0 0 8.333333%;max-width:8.333333%}.col-xl-2{-ms-flex:0 0 16.666667%;flex:0 0 16.666667%;max-width:16.666667%}.col-xl-3{-ms-flex:0 0 25%;flex:0 0 25%;max-width:25%}.col-xl-4{-ms-flex:0 0 33.333333%;flex:0 0 33.333333%;max-width:33.333333%}.col-xl-5{-ms-flex:0 0 41.666667%;flex:0 0 41.666667%;max-width:41.666667%}.col-xl-6{-ms-flex:0 0 50%;flex:0 0 50%;max-width:50%}.col-xl-7{-ms-flex:0 0 58.333333%;flex:0 0 58.333333%;max-width:58.333333%}.col-xl-8{-ms-flex:0 0 66.666667%;flex:0 0 66.666667%;max-width:66.666667%}.col-xl-9{-ms-flex:0 0 75%;flex:0 0 75%;max-width:75%}.col-xl-10{-ms-flex:0 0 83.333333%;flex:0 0 83.333333%;max-width:83.333333%}.col-xl-11{-ms-flex:0 0 91.666667%;flex:0 0 91.666667%;max-width:91.666667%}.col-xl-12{-ms-flex:0 0 100%;flex:0 0 
100%;max-width:100%}.order-xl-first{-ms-flex-order:-1;order:-1}.order-xl-last{-ms-flex-order:13;order:13}.order-xl-0{-ms-flex-order:0;order:0}.order-xl-1{-ms-flex-order:1;order:1}.order-xl-2{-ms-flex-order:2;order:2}.order-xl-3{-ms-flex-order:3;order:3}.order-xl-4{-ms-flex-order:4;order:4}.order-xl-5{-ms-flex-order:5;order:5}.order-xl-6{-ms-flex-order:6;order:6}.order-xl-7{-ms-flex-order:7;order:7}.order-xl-8{-ms-flex-order:8;order:8}.order-xl-9{-ms-flex-order:9;order:9}.order-xl-10{-ms-flex-order:10;order:10}.order-xl-11{-ms-flex-order:11;order:11}.order-xl-12{-ms-flex-order:12;order:12}.offset-xl-0{margin-left:0}.offset-xl-1{margin-left:8.333333%}.offset-xl-2{margin-left:16.666667%}.offset-xl-3{margin-left:25%}.offset-xl-4{margin-left:33.333333%}.offset-xl-5{margin-left:41.666667%}.offset-xl-6{margin-left:50%}.offset-xl-7{margin-left:58.333333%}.offset-xl-8{margin-left:66.666667%}.offset-xl-9{margin-left:75%}.offset-xl-10{margin-left:83.333333%}.offset-xl-11{margin-left:91.666667%}}.table{width:100%;margin-bottom:1rem;color:#212529}.table td,.table th{padding:.75rem;vertical-align:top;border-top:1px solid #dee2e6}.table thead th{vertical-align:bottom;border-bottom:2px solid #dee2e6}.table tbody+tbody{border-top:2px solid #dee2e6}.table-sm td,.table-sm th{padding:.3rem}.table-bordered{border:1px solid #dee2e6}.table-bordered td,.table-bordered th{border:1px solid #dee2e6}.table-bordered thead td,.table-bordered thead th{border-bottom-width:2px}.table-borderless tbody+tbody,.table-borderless td,.table-borderless th,.table-borderless thead th{border:0}.table-striped tbody tr:nth-of-type(odd){background-color:rgba(0,0,0,.05)}.table-hover tbody tr:hover{color:#212529;background-color:rgba(0,0,0,.075)}.table-primary,.table-primary>td,.table-primary>th{background-color:#b8daff}.table-primary tbody+tbody,.table-primary td,.table-primary th,.table-primary thead th{border-color:#7abaff}.table-hover .table-primary:hover{background-color:#9fcdff}.table-hover .table-primary:hover>td,.table-hover .table-primary:hover>th{background-color:#9fcdff}.table-secondary,.table-secondary>td,.table-secondary>th{background-color:#d6d8db}.table-secondary tbody+tbody,.table-secondary td,.table-secondary th,.table-secondary thead th{border-color:#b3b7bb}.table-hover .table-secondary:hover{background-color:#c8cbcf}.table-hover .table-secondary:hover>td,.table-hover .table-secondary:hover>th{background-color:#c8cbcf}.table-success,.table-success>td,.table-success>th{background-color:#c3e6cb}.table-success tbody+tbody,.table-success td,.table-success th,.table-success thead th{border-color:#8fd19e}.table-hover .table-success:hover{background-color:#b1dfbb}.table-hover .table-success:hover>td,.table-hover .table-success:hover>th{background-color:#b1dfbb}.table-info,.table-info>td,.table-info>th{background-color:#bee5eb}.table-info tbody+tbody,.table-info td,.table-info th,.table-info thead th{border-color:#86cfda}.table-hover .table-info:hover{background-color:#abdde5}.table-hover .table-info:hover>td,.table-hover .table-info:hover>th{background-color:#abdde5}.table-warning,.table-warning>td,.table-warning>th{background-color:#ffeeba}.table-warning tbody+tbody,.table-warning td,.table-warning th,.table-warning thead th{border-color:#ffdf7e}.table-hover .table-warning:hover{background-color:#ffe8a1}.table-hover .table-warning:hover>td,.table-hover .table-warning:hover>th{background-color:#ffe8a1}.table-danger,.table-danger>td,.table-danger>th{background-color:#f5c6cb}.table-danger tbody+tbody,.table-danger 
td,.table-danger th,.table-danger thead th{border-color:#ed969e}.table-hover .table-danger:hover{background-color:#f1b0b7}.table-hover .table-danger:hover>td,.table-hover .table-danger:hover>th{background-color:#f1b0b7}.table-light,.table-light>td,.table-light>th{background-color:#fdfdfe}.table-light tbody+tbody,.table-light td,.table-light th,.table-light thead th{border-color:#fbfcfc}.table-hover .table-light:hover{background-color:#ececf6}.table-hover .table-light:hover>td,.table-hover .table-light:hover>th{background-color:#ececf6}.table-dark,.table-dark>td,.table-dark>th{background-color:#c6c8ca}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#95999c}.table-hover .table-dark:hover{background-color:#b9bbbe}.table-hover .table-dark:hover>td,.table-hover .table-dark:hover>th{background-color:#b9bbbe}.table-active,.table-active>td,.table-active>th{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover{background-color:rgba(0,0,0,.075)}.table-hover .table-active:hover>td,.table-hover .table-active:hover>th{background-color:rgba(0,0,0,.075)}.table .thead-dark th{color:#fff;background-color:#343a40;border-color:#454d55}.table .thead-light th{color:#495057;background-color:#e9ecef;border-color:#dee2e6}.table-dark{color:#fff;background-color:#343a40}.table-dark td,.table-dark th,.table-dark thead th{border-color:#454d55}.table-dark.table-bordered{border:0}.table-dark.table-striped tbody tr:nth-of-type(odd){background-color:rgba(255,255,255,.05)}.table-dark.table-hover tbody tr:hover{color:#fff;background-color:rgba(255,255,255,.075)}@media (max-width:575.98px){.table-responsive-sm{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-sm>.table-bordered{border:0}}@media (max-width:767.98px){.table-responsive-md{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-md>.table-bordered{border:0}}@media (max-width:991.98px){.table-responsive-lg{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-lg>.table-bordered{border:0}}@media (max-width:1199.98px){.table-responsive-xl{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive-xl>.table-bordered{border:0}}.table-responsive{display:block;width:100%;overflow-x:auto;-webkit-overflow-scrolling:touch}.table-responsive>.table-bordered{border:0}.form-control{display:block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control::-ms-expand{background-color:transparent;border:0}.form-control:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.form-control:focus{color:#495057;background-color:#fff;border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem 
rgba(0,123,255,.25)}.form-control::-webkit-input-placeholder{color:#6c757d;opacity:1}.form-control::-moz-placeholder{color:#6c757d;opacity:1}.form-control:-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::-ms-input-placeholder{color:#6c757d;opacity:1}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}input[type=date].form-control,input[type=datetime-local].form-control,input[type=month].form-control,input[type=time].form-control{-webkit-appearance:none;-moz-appearance:none;appearance:none}select.form-control:focus::-ms-value{color:#495057;background-color:#fff}.form-control-file,.form-control-range{display:block;width:100%}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem;line-height:1.5}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem;line-height:1.5}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;font-size:1rem;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-right:0;padding-left:0}.form-control-sm{height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.form-control-lg{height:calc(1.5em + 1rem + 2px);padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}select.form-control[multiple],select.form-control[size]{height:auto}textarea.form-control{height:auto}.form-group{margin-bottom:1rem}.form-text{display:block;margin-top:.25rem}.form-row{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;margin-right:-5px;margin-left:-5px}.form-row>.col,.form-row>[class*=col-]{padding-right:5px;padding-left:5px}.form-check{position:relative;display:block;padding-left:1.25rem}.form-check-input{position:absolute;margin-top:.3rem;margin-left:-1.25rem}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{color:#6c757d}.form-check-label{margin-bottom:0}.form-check-inline{display:-ms-inline-flexbox;display:inline-flex;-ms-flex-align:center;align-items:center;padding-left:0;margin-right:.75rem}.form-check-inline .form-check-input{position:static;margin-top:0;margin-right:.3125rem;margin-left:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#28a745}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(40,167,69,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#28a745;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + 
.375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-valid,.was-validated .custom-select:valid{border-color:#28a745;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%2328a745' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-valid:focus,.was-validated .custom-select:valid:focus{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#28a745}.form-check-input.is-valid~.valid-feedback,.form-check-input.is-valid~.valid-tooltip,.was-validated .form-check-input:valid~.valid-feedback,.was-validated .form-check-input:valid~.valid-tooltip{display:block}.custom-control-input.is-valid~.custom-control-label,.was-validated .custom-control-input:valid~.custom-control-label{color:#28a745}.custom-control-input.is-valid~.custom-control-label::before,.was-validated .custom-control-input:valid~.custom-control-label::before{border-color:#28a745}.custom-control-input.is-valid:checked~.custom-control-label::before,.was-validated .custom-control-input:valid:checked~.custom-control-label::before{border-color:#34ce57;background-color:#34ce57}.custom-control-input.is-valid:focus~.custom-control-label::before,.was-validated .custom-control-input:valid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.custom-control-input.is-valid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:valid:focus:not(:checked)~.custom-control-label::before{border-color:#28a745}.custom-file-input.is-valid~.custom-file-label,.was-validated .custom-file-input:valid~.custom-file-label{border-color:#28a745}.custom-file-input.is-valid:focus~.custom-file-label,.was-validated .custom-file-input:valid:focus~.custom-file-label{border-color:#28a745;box-shadow:0 0 0 .2rem rgba(40,167,69,.25)}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:80%;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;line-height:1.5;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-right:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' 
stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated .form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) right calc(.375em + .1875rem)}.custom-select.is-invalid,.was-validated .custom-select:invalid{border-color:#dc3545;padding-right:calc(.75em + 2.3125rem);background:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px,url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' fill='none' stroke='%23dc3545' viewBox='0 0 12 12'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e") #fff no-repeat center right 1.75rem/calc(.75em + .375rem) calc(.75em + .375rem)}.custom-select.is-invalid:focus,.was-validated .custom-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated .form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-input.is-invalid~.invalid-feedback,.form-check-input.is-invalid~.invalid-tooltip,.was-validated .form-check-input:invalid~.invalid-feedback,.was-validated .form-check-input:invalid~.invalid-tooltip{display:block}.custom-control-input.is-invalid~.custom-control-label,.was-validated .custom-control-input:invalid~.custom-control-label{color:#dc3545}.custom-control-input.is-invalid~.custom-control-label::before,.was-validated .custom-control-input:invalid~.custom-control-label::before{border-color:#dc3545}.custom-control-input.is-invalid:checked~.custom-control-label::before,.was-validated .custom-control-input:invalid:checked~.custom-control-label::before{border-color:#e4606d;background-color:#e4606d}.custom-control-input.is-invalid:focus~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.custom-control-input.is-invalid:focus:not(:checked)~.custom-control-label::before,.was-validated .custom-control-input:invalid:focus:not(:checked)~.custom-control-label::before{border-color:#dc3545}.custom-file-input.is-invalid~.custom-file-label,.was-validated .custom-file-input:invalid~.custom-file-label{border-color:#dc3545}.custom-file-input.is-invalid:focus~.custom-file-label,.was-validated .custom-file-input:invalid:focus~.custom-file-label{border-color:#dc3545;box-shadow:0 0 0 .2rem rgba(220,53,69,.25)}.form-inline{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center}.form-inline .form-check{width:100%}@media (min-width:576px){.form-inline label{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;margin-bottom:0}.form-inline .form-group{display:-ms-flexbox;display:flex;-ms-flex:0 0 auto;flex:0 0 auto;-ms-flex-flow:row wrap;flex-flow:row wrap;-ms-flex-align:center;align-items:center;margin-bottom:0}.form-inline 
.form-control{display:inline-block;width:auto;vertical-align:middle}.form-inline .form-control-plaintext{display:inline-block}.form-inline .custom-select,.form-inline .input-group{width:auto}.form-inline .form-check{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:auto;padding-left:0}.form-inline .form-check-input{position:relative;-ms-flex-negative:0;flex-shrink:0;margin-top:0;margin-right:.25rem;margin-left:0}.form-inline .custom-control{-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center}.form-inline .custom-control-label{margin-bottom:0}}.btn{display:inline-block;font-weight:400;color:#212529;text-align:center;vertical-align:middle;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;line-height:1.5;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529;text-decoration:none}.btn.focus,.btn:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.btn.disabled,.btn:disabled{opacity:.65}.btn:not(:disabled):not(.disabled){cursor:pointer}a.btn.disabled,fieldset:disabled a.btn{pointer-events:none}.btn-primary{color:#fff;background-color:#7a3016;border-color:#7a3016}.btn-primary:hover{color:#fff;background-color:#0069d9;border-color:#0062cc}.btn-primary.focus,.btn-primary:focus{color:#fff;background-color:#0069d9;border-color:#0062cc;box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#7a3016;border-color:#7a3016}.btn-primary:not(:disabled):not(.disabled).active,.btn-primary:not(:disabled):not(.disabled):active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0062cc;border-color:#005cbf}.btn-primary:not(:disabled):not(.disabled).active:focus,.btn-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(38,143,255,.5)}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:hover{color:#fff;background-color:#5a6268;border-color:#545b62}.btn-secondary.focus,.btn-secondary:focus{color:#fff;background-color:#5a6268;border-color:#545b62;box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:not(:disabled):not(.disabled).active,.btn-secondary:not(:disabled):not(.disabled):active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#545b62;border-color:#4e555b}.btn-secondary:not(:disabled):not(.disabled).active:focus,.btn-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(130,138,145,.5)}.btn-success{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:hover{color:#fff;background-color:#218838;border-color:#1e7e34}.btn-success.focus,.btn-success:focus{color:#fff;background-color:#218838;border-color:#1e7e34;box-shadow:0 0 0 .2rem 
rgba(72,180,97,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#28a745;border-color:#28a745}.btn-success:not(:disabled):not(.disabled).active,.btn-success:not(:disabled):not(.disabled):active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#1e7e34;border-color:#1c7430}.btn-success:not(:disabled):not(.disabled).active:focus,.btn-success:not(:disabled):not(.disabled):active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(72,180,97,.5)}.btn-info{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:hover{color:#fff;background-color:#138496;border-color:#117a8b}.btn-info.focus,.btn-info:focus{color:#fff;background-color:#138496;border-color:#117a8b;box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-info.disabled,.btn-info:disabled{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-info:not(:disabled):not(.disabled).active,.btn-info:not(:disabled):not(.disabled):active,.show>.btn-info.dropdown-toggle{color:#fff;background-color:#117a8b;border-color:#10707f}.btn-info:not(:disabled):not(.disabled).active:focus,.btn-info:not(:disabled):not(.disabled):active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(58,176,195,.5)}.btn-warning{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:hover{color:#212529;background-color:#e0a800;border-color:#d39e00}.btn-warning.focus,.btn-warning:focus{color:#212529;background-color:#e0a800;border-color:#d39e00;box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-warning:not(:disabled):not(.disabled).active,.btn-warning:not(:disabled):not(.disabled):active,.show>.btn-warning.dropdown-toggle{color:#212529;background-color:#d39e00;border-color:#c69500}.btn-warning:not(:disabled):not(.disabled).active:focus,.btn-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(222,170,12,.5)}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:hover{color:#fff;background-color:#c82333;border-color:#bd2130}.btn-danger.focus,.btn-danger:focus{color:#fff;background-color:#c82333;border-color:#bd2130;box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:not(:disabled):not(.disabled).active,.btn-danger:not(:disabled):not(.disabled):active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#bd2130;border-color:#b21f2d}.btn-danger:not(:disabled):not(.disabled).active:focus,.btn-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(225,83,97,.5)}.btn-light{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#212529;background-color:#e2e6ea;border-color:#dae0e5}.btn-light.focus,.btn-light:focus{color:#212529;background-color:#e2e6ea;border-color:#dae0e5;box-shadow:0 0 0 .2rem rgba(216,217,219,.5)}.btn-light.disabled,.btn-light:disabled{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:not(:disabled):not(.disabled).active,.btn-light:not(:disabled):not(.disabled):active,.show>.btn-light.dropdown-toggle{color:#212529;background-color:#dae0e5;border-color:#d3d9df}.btn-light:not(:disabled):not(.disabled).active:focus,.btn-light:not(:disabled):not(.disabled):active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(216,217,219,.5)}.btn-dark{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:hover{color:#fff;background-color:#23272b;border-color:#1d2124}.btn-dark.focus,.btn-dark:focus{color:#fff;background-color:#23272b;border-color:#1d2124;box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#343a40;border-color:#343a40}.btn-dark:not(:disabled):not(.disabled).active,.btn-dark:not(:disabled):not(.disabled):active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1d2124;border-color:#171a1d}.btn-dark:not(:disabled):not(.disabled).active:focus,.btn-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(82,88,93,.5)}.btn-outline-primary{color:#7a3016;border-color:#7a3016}.btn-outline-primary:hover{color:#fff;background-color:#7a3016;border-color:#7a3016}.btn-outline-primary.focus,.btn-outline-primary:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#7a3016;background-color:transparent}.btn-outline-primary:not(:disabled):not(.disabled).active,.btn-outline-primary:not(:disabled):not(.disabled):active,.show>.btn-outline-primary.dropdown-toggle{color:#fff;background-color:#7a3016;border-color:#7a3016}.btn-outline-primary:not(:disabled):not(.disabled).active:focus,.btn-outline-primary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-primary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary.focus,.btn-outline-secondary:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-secondary:not(:disabled):not(.disabled).active,.btn-outline-secondary:not(:disabled):not(.disabled):active,.show>.btn-outline-secondary.dropdown-toggle{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-outline-secondary:not(:disabled):not(.disabled).active:focus,.btn-outline-secondary:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.btn-outline-success{color:#28a745;border-color:#28a745}.btn-outline-success:hover{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success.focus,.btn-outline-success:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#28a745;background-color:transparent}.btn-outline-success:not(:disabled):not(.disabled).active,.btn-outline-success:not(:disabled):not(.disabled):active,.show>.btn-outline-success.dropdown-toggle{color:#fff;background-color:#28a745;border-color:#28a745}.btn-outline-success:not(:disabled):not(.disabled).active:focus,.btn-outline-success:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-success.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.btn-outline-info{color:#17a2b8;border-color:#17a2b8}.btn-outline-info:hover{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info.focus,.btn-outline-info:focus{box-shadow:0 0 0 .2rem 
rgba(23,162,184,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#17a2b8;background-color:transparent}.btn-outline-info:not(:disabled):not(.disabled).active,.btn-outline-info:not(:disabled):not(.disabled):active,.show>.btn-outline-info.dropdown-toggle{color:#fff;background-color:#17a2b8;border-color:#17a2b8}.btn-outline-info:not(:disabled):not(.disabled).active:focus,.btn-outline-info:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-info.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning.focus,.btn-outline-warning:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-warning:not(:disabled):not(.disabled).active,.btn-outline-warning:not(:disabled):not(.disabled):active,.show>.btn-outline-warning.dropdown-toggle{color:#212529;background-color:#ffc107;border-color:#ffc107}.btn-outline-warning:not(:disabled):not(.disabled).active:focus,.btn-outline-warning:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-warning.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger.focus,.btn-outline-danger:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-danger:not(:disabled):not(.disabled).active,.btn-outline-danger:not(:disabled):not(.disabled):active,.show>.btn-outline-danger.dropdown-toggle{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-outline-danger:not(:disabled):not(.disabled).active:focus,.btn-outline-danger:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-danger.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light.focus,.btn-outline-light:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-light:not(:disabled):not(.disabled).active,.btn-outline-light:not(:disabled):not(.disabled):active,.show>.btn-outline-light.dropdown-toggle{color:#212529;background-color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:not(:disabled):not(.disabled).active:focus,.btn-outline-light:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-light.dropdown-toggle:focus{box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.btn-outline-dark{color:#343a40;border-color:#343a40}.btn-outline-dark:hover{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark.focus,.btn-outline-dark:focus{box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#343a40;background-color:transparent}.btn-outline-dark:not(:disabled):not(.disabled).active,.btn-outline-dark:not(:disabled):not(.disabled):active,.show>.btn-outline-dark.dropdown-toggle{color:#fff;background-color:#343a40;border-color:#343a40}.btn-outline-dark:not(:disabled):not(.disabled).active:focus,.btn-outline-dark:not(:disabled):not(.disabled):active:focus,.show>.btn-outline-dark.dropdown-toggle:focus{box-shadow:0 0 0 .2rem 
rgba(52,58,64,.5)}.btn-link{font-weight:400;color:#7a3016;text-decoration:none}.btn-link:hover{color:#0056b3;text-decoration:underline}.btn-link.focus,.btn-link:focus{text-decoration:underline}.btn-link.disabled,.btn-link:disabled{color:#6c757d;pointer-events:none}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.btn-block{display:block;width:100%}.btn-block+.btn-block{margin-top:.5rem}input[type=button].btn-block,input[type=reset].btn-block,input[type=submit].btn-block{width:100%}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{position:relative;height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.dropdown,.dropleft,.dropright,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-right:.3em solid transparent;border-bottom:0;border-left:.3em solid transparent}.dropdown-toggle:empty::after{margin-left:0}.dropdown-menu{position:absolute;top:100%;left:0;z-index:1000;display:none;float:left;min-width:10rem;padding:.5rem 0;margin:.125rem 0 0;font-size:1rem;color:#212529;text-align:left;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu-left{right:auto;left:0}.dropdown-menu-right{right:0;left:auto}@media (min-width:576px){.dropdown-menu-sm-left{right:auto;left:0}.dropdown-menu-sm-right{right:0;left:auto}}@media (min-width:768px){.dropdown-menu-md-left{right:auto;left:0}.dropdown-menu-md-right{right:0;left:auto}}@media (min-width:992px){.dropdown-menu-lg-left{right:auto;left:0}.dropdown-menu-lg-right{right:0;left:auto}}@media (min-width:1200px){.dropdown-menu-xl-left{right:auto;left:0}.dropdown-menu-xl-right{right:0;left:auto}}.dropup .dropdown-menu{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:0;border-right:.3em solid transparent;border-bottom:.3em solid;border-left:.3em solid transparent}.dropup .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-menu{top:0;right:auto;left:100%;margin-top:0;margin-left:.125rem}.dropright .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:0;border-bottom:.3em solid transparent;border-left:.3em solid}.dropright .dropdown-toggle:empty::after{margin-left:0}.dropright .dropdown-toggle::after{vertical-align:0}.dropleft .dropdown-menu{top:0;right:100%;left:auto;margin-top:0;margin-right:.125rem}.dropleft .dropdown-toggle::after{display:inline-block;margin-left:.255em;vertical-align:.255em;content:""}.dropleft .dropdown-toggle::after{display:none}.dropleft .dropdown-toggle::before{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-right:.3em solid;border-bottom:.3em solid transparent}.dropleft .dropdown-toggle:empty::after{margin-left:0}.dropleft 
.dropdown-toggle::before{vertical-align:0}.dropdown-menu[x-placement^=bottom],.dropdown-menu[x-placement^=left],.dropdown-menu[x-placement^=right],.dropdown-menu[x-placement^=top]{right:auto;bottom:auto}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid #e9ecef}.dropdown-item{display:block;width:100%;padding:.25rem 1.5rem;clear:both;font-weight:400;color:#212529;text-align:inherit;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#16181b;text-decoration:none;background-color:#f8f9fa}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#7a3016}.dropdown-item.disabled,.dropdown-item:disabled{color:#6c757d;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1.5rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1.5rem;color:#212529}.btn-group,.btn-group-vertical{position:relative;display:-ms-inline-flexbox;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;-ms-flex:1 1 auto;flex:1 1 auto}.btn-group-vertical>.btn:hover,.btn-group>.btn:hover{z-index:1}.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus{z-index:1}.btn-toolbar{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-pack:start;justify-content:flex-start}.btn-toolbar .input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-left:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-right-radius:0;border-bottom-right-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.dropdown-toggle-split{padding-right:.5625rem;padding-left:.5625rem}.dropdown-toggle-split::after,.dropright .dropdown-toggle-split::after,.dropup .dropdown-toggle-split::after{margin-left:0}.dropleft .dropdown-toggle-split::before{margin-right:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-right:.375rem;padding-left:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-right:.75rem;padding-left:.75rem}.btn-group-vertical{-ms-flex-direction:column;flex-direction:column;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:center;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-right-radius:0;border-bottom-left-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn:not(:first-child){border-top-left-radius:0;border-top-right-radius:0}.btn-group-toggle>.btn,.btn-group-toggle>.btn-group>.btn{margin-bottom:0}.btn-group-toggle>.btn input[type=checkbox],.btn-group-toggle>.btn input[type=radio],.btn-group-toggle>.btn-group>.btn input[type=checkbox],.btn-group-toggle>.btn-group>.btn 
input[type=radio]{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.input-group{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:stretch;align-items:stretch;width:100%}.input-group>.custom-file,.input-group>.custom-select,.input-group>.form-control,.input-group>.form-control-plaintext{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;width:1%;min-width:0;margin-bottom:0}.input-group>.custom-file+.custom-file,.input-group>.custom-file+.custom-select,.input-group>.custom-file+.form-control,.input-group>.custom-select+.custom-file,.input-group>.custom-select+.custom-select,.input-group>.custom-select+.form-control,.input-group>.form-control+.custom-file,.input-group>.form-control+.custom-select,.input-group>.form-control+.form-control,.input-group>.form-control-plaintext+.custom-file,.input-group>.form-control-plaintext+.custom-select,.input-group>.form-control-plaintext+.form-control{margin-left:-1px}.input-group>.custom-file .custom-file-input:focus~.custom-file-label,.input-group>.custom-select:focus,.input-group>.form-control:focus{z-index:3}.input-group>.custom-file .custom-file-input:focus{z-index:4}.input-group>.custom-select:not(:last-child),.input-group>.form-control:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-select:not(:first-child),.input-group>.form-control:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>.custom-file{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center}.input-group>.custom-file:not(:last-child) .custom-file-label,.input-group>.custom-file:not(:last-child) .custom-file-label::after{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.custom-file:not(:first-child) .custom-file-label{border-top-left-radius:0;border-bottom-left-radius:0}.input-group-append,.input-group-prepend{display:-ms-flexbox;display:flex}.input-group-append .btn,.input-group-prepend .btn{position:relative;z-index:2}.input-group-append .btn:focus,.input-group-prepend .btn:focus{z-index:3}.input-group-append .btn+.btn,.input-group-append .btn+.input-group-text,.input-group-append .input-group-text+.btn,.input-group-append .input-group-text+.input-group-text,.input-group-prepend .btn+.btn,.input-group-prepend .btn+.input-group-text,.input-group-prepend .input-group-text+.btn,.input-group-prepend .input-group-text+.input-group-text{margin-left:-1px}.input-group-prepend{margin-right:-1px}.input-group-append{margin-left:-1px}.input-group-text{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.375rem .75rem;margin-bottom:0;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-text input[type=checkbox],.input-group-text input[type=radio]{margin-top:0}.input-group-lg>.custom-select,.input-group-lg>.form-control:not(textarea){height:calc(1.5em + 1rem + 2px)}.input-group-lg>.custom-select,.input-group-lg>.form-control,.input-group-lg>.input-group-append>.btn,.input-group-lg>.input-group-append>.input-group-text,.input-group-lg>.input-group-prepend>.btn,.input-group-lg>.input-group-prepend>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;line-height:1.5;border-radius:.3rem}.input-group-sm>.custom-select,.input-group-sm>.form-control:not(textarea){height:calc(1.5em + .5rem + 
2px)}.input-group-sm>.custom-select,.input-group-sm>.form-control,.input-group-sm>.input-group-append>.btn,.input-group-sm>.input-group-append>.input-group-text,.input-group-sm>.input-group-prepend>.btn,.input-group-sm>.input-group-prepend>.input-group-text{padding:.25rem .5rem;font-size:.875rem;line-height:1.5;border-radius:.2rem}.input-group-lg>.custom-select,.input-group-sm>.custom-select{padding-right:1.75rem}.input-group>.input-group-append:last-child>.btn:not(:last-child):not(.dropdown-toggle),.input-group>.input-group-append:last-child>.input-group-text:not(:last-child),.input-group>.input-group-append:not(:last-child)>.btn,.input-group>.input-group-append:not(:last-child)>.input-group-text,.input-group>.input-group-prepend>.btn,.input-group>.input-group-prepend>.input-group-text{border-top-right-radius:0;border-bottom-right-radius:0}.input-group>.input-group-append>.btn,.input-group>.input-group-append>.input-group-text,.input-group>.input-group-prepend:first-child>.btn:not(:first-child),.input-group>.input-group-prepend:first-child>.input-group-text:not(:first-child),.input-group>.input-group-prepend:not(:first-child)>.btn,.input-group>.input-group-prepend:not(:first-child)>.input-group-text{border-top-left-radius:0;border-bottom-left-radius:0}.custom-control{position:relative;display:block;min-height:1.5rem;padding-left:1.5rem}.custom-control-inline{display:-ms-inline-flexbox;display:inline-flex;margin-right:1rem}.custom-control-input{position:absolute;left:0;z-index:-1;width:1rem;height:1.25rem;opacity:0}.custom-control-input:checked~.custom-control-label::before{color:#fff;border-color:#7a3016;background-color:#7a3016}.custom-control-input:focus~.custom-control-label::before{box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-control-input:focus:not(:checked)~.custom-control-label::before{border-color:#80bdff}.custom-control-input:not(:disabled):active~.custom-control-label::before{color:#fff;background-color:#b3d7ff;border-color:#b3d7ff}.custom-control-input:disabled~.custom-control-label,.custom-control-input[disabled]~.custom-control-label{color:#6c757d}.custom-control-input:disabled~.custom-control-label::before,.custom-control-input[disabled]~.custom-control-label::before{background-color:#e9ecef}.custom-control-label{position:relative;margin-bottom:0;vertical-align:top}.custom-control-label::before{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;pointer-events:none;content:"";background-color:#fff;border:#adb5bd solid 1px}.custom-control-label::after{position:absolute;top:.25rem;left:-1.5rem;display:block;width:1rem;height:1rem;content:"";background:no-repeat 50%/50% 50%}.custom-checkbox .custom-control-label::before{border-radius:.25rem}.custom-checkbox .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath fill='%23fff' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26l2.974 2.99L8 2.193z'/%3e%3c/svg%3e")}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::before{border-color:#7a3016;background-color:#7a3016}.custom-checkbox .custom-control-input:indeterminate~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4' viewBox='0 0 4 4'%3e%3cpath stroke='%23fff' d='M0 2h4'/%3e%3c/svg%3e")}.custom-checkbox 
.custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-checkbox .custom-control-input:disabled:indeterminate~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-radio .custom-control-label::before{border-radius:50%}.custom-radio .custom-control-input:checked~.custom-control-label::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.custom-radio .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-switch{padding-left:2.25rem}.custom-switch .custom-control-label::before{left:-2.25rem;width:1.75rem;pointer-events:all;border-radius:.5rem}.custom-switch .custom-control-label::after{top:calc(.25rem + 2px);left:calc(-2.25rem + 2px);width:calc(1rem - 4px);height:calc(1rem - 4px);background-color:#adb5bd;border-radius:.5rem;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:transform .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,-webkit-transform .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-switch .custom-control-label::after{transition:none}}.custom-switch .custom-control-input:checked~.custom-control-label::after{background-color:#fff;-webkit-transform:translateX(.75rem);transform:translateX(.75rem)}.custom-switch .custom-control-input:disabled:checked~.custom-control-label::before{background-color:rgba(0,123,255,.5)}.custom-select{display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);padding:.375rem 1.75rem .375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#495057;vertical-align:middle;background:#fff url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='4' height='5' viewBox='0 0 4 5'%3e%3cpath fill='%23343a40' d='M2 0L0 2h4zm0 5L0 3h4z'/%3e%3c/svg%3e") no-repeat right .75rem center/8px 10px;border:1px solid #ced4da;border-radius:.25rem;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-select:focus{border-color:#80bdff;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.custom-select:focus::-ms-value{color:#495057;background-color:#fff}.custom-select[multiple],.custom-select[size]:not([size="1"]){height:auto;padding-right:.75rem;background-image:none}.custom-select:disabled{color:#6c757d;background-color:#e9ecef}.custom-select::-ms-expand{display:none}.custom-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #495057}.custom-select-sm{height:calc(1.5em + .5rem + 2px);padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:.875rem}.custom-select-lg{height:calc(1.5em + 1rem + 2px);padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.custom-file{position:relative;display:inline-block;width:100%;height:calc(1.5em + .75rem + 2px);margin-bottom:0}.custom-file-input{position:relative;z-index:2;width:100%;height:calc(1.5em + .75rem + 2px);margin:0;opacity:0}.custom-file-input:focus~.custom-file-label{border-color:#80bdff;box-shadow:0 0 0 .2rem 
rgba(0,123,255,.25)}.custom-file-input:disabled~.custom-file-label,.custom-file-input[disabled]~.custom-file-label{background-color:#e9ecef}.custom-file-input:lang(en)~.custom-file-label::after{content:"Browse"}.custom-file-input~.custom-file-label[data-browse]::after{content:attr(data-browse)}.custom-file-label{position:absolute;top:0;right:0;left:0;z-index:1;height:calc(1.5em + .75rem + 2px);padding:.375rem .75rem;font-weight:400;line-height:1.5;color:#495057;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem}.custom-file-label::after{position:absolute;top:0;right:0;bottom:0;z-index:3;display:block;height:calc(1.5em + .75rem);padding:.375rem .75rem;line-height:1.5;color:#495057;content:"Browse";background-color:#e9ecef;border-left:inherit;border-radius:0 .25rem .25rem 0}.custom-range{width:100%;height:1.4rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.custom-range:focus{outline:0}.custom-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range:focus::-ms-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .2rem rgba(0,123,255,.25)}.custom-range::-moz-focus-outer{border:0}.custom-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#7a3016;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.custom-range::-webkit-slider-thumb:active{background-color:#b3d7ff}.custom-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#7a3016;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.custom-range::-moz-range-thumb{-moz-transition:none;transition:none}}.custom-range::-moz-range-thumb:active{background-color:#b3d7ff}.custom-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.custom-range::-ms-thumb{width:1rem;height:1rem;margin-top:0;margin-right:.2rem;margin-left:.2rem;background-color:#7a3016;border:0;border-radius:1rem;-ms-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none}@media 
(prefers-reduced-motion:reduce){.custom-range::-ms-thumb{-ms-transition:none;transition:none}}.custom-range::-ms-thumb:active{background-color:#b3d7ff}.custom-range::-ms-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:transparent;border-color:transparent;border-width:.5rem}.custom-range::-ms-fill-lower{background-color:#dee2e6;border-radius:1rem}.custom-range::-ms-fill-upper{margin-right:15px;background-color:#dee2e6;border-radius:1rem}.custom-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.custom-range:disabled::-webkit-slider-runnable-track{cursor:default}.custom-range:disabled::-moz-range-thumb{background-color:#adb5bd}.custom-range:disabled::-moz-range-track{cursor:default}.custom-range:disabled::-ms-thumb{background-color:#adb5bd}.custom-control-label::before,.custom-file-label,.custom-select{transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.custom-control-label::before,.custom-file-label,.custom-select{transition:none}}.nav{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding-left:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem}.nav-link:focus,.nav-link:hover{text-decoration:none}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-item{margin-bottom:-1px}.nav-tabs .nav-link{border:1px solid transparent;border-top-left-radius:.25rem;border-top-right-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-left-radius:0;border-top-right-radius:0}.nav-pills .nav-link{border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#7a3016}.nav-fill .nav-item{-ms-flex:1 1 auto;flex:1 1 auto;text-align:center}.nav-justified .nav-item{-ms-flex-preferred-size:0;flex-basis:0;-ms-flex-positive:1;flex-grow:1;text-align:center}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between;padding:.5rem 1rem}.navbar .container,.navbar .container-fluid,.navbar .container-lg,.navbar .container-md,.navbar .container-sm,.navbar .container-xl{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:justify;justify-content:space-between}.navbar-brand{display:inline-block;padding-top:.3125rem;padding-bottom:.3125rem;margin-right:1rem;font-size:1.25rem;line-height:inherit;white-space:nowrap}.navbar-brand:focus,.navbar-brand:hover{text-decoration:none}.navbar-nav{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-right:0;padding-left:0}.navbar-nav .dropdown-menu{position:static;float:none}.navbar-text{display:inline-block;padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{-ms-flex-preferred-size:100%;flex-basis:100%;-ms-flex-positive:1;flex-grow:1;-ms-flex-align:center;align-items:center}.navbar-toggler{padding:.25rem 
.75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem}.navbar-toggler:focus,.navbar-toggler:hover{text-decoration:none}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;content:"";background:no-repeat center center;background-size:100% 100%}@media (max-width:575.98px){.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{padding-right:0;padding-left:0}}@media (min-width:576px){.navbar-expand-sm{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-sm .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-sm>.container,.navbar-expand-sm>.container-fluid,.navbar-expand-sm>.container-lg,.navbar-expand-sm>.container-md,.navbar-expand-sm>.container-sm,.navbar-expand-sm>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-sm .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}}@media (max-width:767.98px){.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{padding-right:0;padding-left:0}}@media (min-width:768px){.navbar-expand-md{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-md .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-md>.container,.navbar-expand-md>.container-fluid,.navbar-expand-md>.container-lg,.navbar-expand-md>.container-md,.navbar-expand-md>.container-sm,.navbar-expand-md>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-md .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}}@media (max-width:991.98px){.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{padding-right:0;padding-left:0}}@media (min-width:992px){.navbar-expand-lg{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-lg .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-lg>.container,.navbar-expand-lg>.container-fluid,.navbar-expand-lg>.container-lg,.navbar-expand-lg>.container-md,.navbar-expand-lg>.container-sm,.navbar-expand-lg>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-lg .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}}@media 
(max-width:1199.98px){.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{padding-right:0;padding-left:0}}@media (min-width:1200px){.navbar-expand-xl{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand-xl .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand-xl>.container,.navbar-expand-xl>.container-fluid,.navbar-expand-xl>.container-lg,.navbar-expand-xl>.container-md,.navbar-expand-xl>.container-sm,.navbar-expand-xl>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand-xl .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}}.navbar-expand{-ms-flex-flow:row nowrap;flex-flow:row nowrap;-ms-flex-pack:start;justify-content:flex-start}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{padding-right:0;padding-left:0}.navbar-expand .navbar-nav{-ms-flex-direction:row;flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-right:.5rem;padding-left:.5rem}.navbar-expand>.container,.navbar-expand>.container-fluid,.navbar-expand>.container-lg,.navbar-expand>.container-md,.navbar-expand>.container-sm,.navbar-expand>.container-xl{-ms-flex-wrap:nowrap;flex-wrap:nowrap}.navbar-expand .navbar-collapse{display:-ms-flexbox!important;display:flex!important;-ms-flex-preferred-size:auto;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.5)}.navbar-light .navbar-nav .nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .active>.nav-link,.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .nav-link.show,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.5);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.5)}.navbar-light .navbar-text a{color:rgba(0,0,0,.9)}.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.5)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .active>.nav-link,.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .nav-link.show,.navbar-dark 
.navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.5);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' width='30' height='30' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.5%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.5)}.navbar-dark .navbar-text a{color:#fff}.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-right:0;margin-left:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-body{-ms-flex:1 1 auto;flex:1 1 auto;min-height:1px;padding:1.25rem}.card-title{margin-bottom:.75rem}.card-subtitle{margin-top:-.375rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link:hover{text-decoration:none}.card-link+.card-link{margin-left:1.25rem}.card-header{padding:.75rem 1.25rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-header+.list-group .list-group-item:first-child{border-top:0}.card-footer{padding:.75rem 1.25rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-right:-.625rem;margin-bottom:-.75rem;margin-left:-.625rem;border-bottom:0}.card-header-pills{margin-right:-.625rem;margin-left:-.625rem}.card-img-overlay{position:absolute;top:0;right:0;bottom:0;left:0;padding:1.25rem}.card-img,.card-img-bottom,.card-img-top{-ms-flex-negative:0;flex-shrink:0;width:100%}.card-img,.card-img-top{border-top-left-radius:calc(.25rem - 1px);border-top-right-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-right-radius:calc(.25rem - 1px);border-bottom-left-radius:calc(.25rem - 1px)}.card-deck .card{margin-bottom:15px}@media (min-width:576px){.card-deck{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap;margin-right:-15px;margin-left:-15px}.card-deck .card{-ms-flex:1 0 0%;flex:1 0 0%;margin-right:15px;margin-bottom:0;margin-left:15px}}.card-group>.card{margin-bottom:15px}@media (min-width:576px){.card-group{display:-ms-flexbox;display:flex;-ms-flex-flow:row wrap;flex-flow:row wrap}.card-group>.card{-ms-flex:1 0 0%;flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-left:0;border-left:0}.card-group>.card:not(:last-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-right-radius:0}.card-group>.card:not(:first-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:first-child) 
.card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-left-radius:0}}.card-columns .card{margin-bottom:.75rem}@media (min-width:576px){.card-columns{-webkit-column-count:3;-moz-column-count:3;column-count:3;-webkit-column-gap:1.25rem;-moz-column-gap:1.25rem;column-gap:1.25rem;orphans:1;widows:1}.card-columns .card{display:inline-block;width:100%}}.accordion>.card{overflow:hidden}.accordion>.card:not(:last-of-type){border-bottom:0;border-bottom-right-radius:0;border-bottom-left-radius:0}.accordion>.card:not(:first-of-type){border-top-left-radius:0;border-top-right-radius:0}.accordion>.card>.card-header{border-radius:0;margin-bottom:-1px}.breadcrumb{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;padding:.75rem 1rem;margin-bottom:1rem;list-style:none;background-color:#e9ecef;border-radius:.25rem}.breadcrumb-item{display:-ms-flexbox;display:flex}.breadcrumb-item+.breadcrumb-item{padding-left:.5rem}.breadcrumb-item+.breadcrumb-item::before{display:inline-block;padding-right:.5rem;color:#6c757d;content:"/"}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:underline}.breadcrumb-item+.breadcrumb-item:hover::before{text-decoration:none}.breadcrumb-item.active{color:#6c757d}.pagination{display:-ms-flexbox;display:flex;padding-left:0;list-style:none;border-radius:.25rem}.page-link{position:relative;display:block;padding:.5rem .75rem;margin-left:-1px;line-height:1.25;color:#7a3016;background-color:#fff;border:1px solid #dee2e6}.page-link:hover{z-index:2;color:#0056b3;text-decoration:none;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.25)}.page-item:first-child .page-link{margin-left:0;border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.page-item:last-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item.active .page-link{z-index:3;color:#fff;background-color:#7a3016;border-color:#7a3016}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;cursor:auto;background-color:#fff;border-color:#dee2e6}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem;line-height:1.5}.pagination-lg .page-item:first-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem;line-height:1.5}.pagination-sm .page-item:first-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.badge{display:inline-block;padding:.25em .4em;font-size:75%;font-weight:700;line-height:1;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.badge{transition:none}}a.badge:focus,a.badge:hover{text-decoration:none}.badge:empty{display:none}.btn 
.badge{position:relative;top:-1px}.badge-pill{padding-right:.6em;padding-left:.6em;border-radius:10rem}.badge-primary{color:#fff;background-color:#7a3016}a.badge-primary:focus,a.badge-primary:hover{color:#fff;background-color:#0062cc}a.badge-primary.focus,a.badge-primary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(0,123,255,.5)}.badge-secondary{color:#fff;background-color:#6c757d}a.badge-secondary:focus,a.badge-secondary:hover{color:#fff;background-color:#545b62}a.badge-secondary.focus,a.badge-secondary:focus{outline:0;box-shadow:0 0 0 .2rem rgba(108,117,125,.5)}.badge-success{color:#fff;background-color:#28a745}a.badge-success:focus,a.badge-success:hover{color:#fff;background-color:#1e7e34}a.badge-success.focus,a.badge-success:focus{outline:0;box-shadow:0 0 0 .2rem rgba(40,167,69,.5)}.badge-info{color:#fff;background-color:#17a2b8}a.badge-info:focus,a.badge-info:hover{color:#fff;background-color:#117a8b}a.badge-info.focus,a.badge-info:focus{outline:0;box-shadow:0 0 0 .2rem rgba(23,162,184,.5)}.badge-warning{color:#212529;background-color:#ffc107}a.badge-warning:focus,a.badge-warning:hover{color:#212529;background-color:#d39e00}a.badge-warning.focus,a.badge-warning:focus{outline:0;box-shadow:0 0 0 .2rem rgba(255,193,7,.5)}.badge-danger{color:#fff;background-color:#dc3545}a.badge-danger:focus,a.badge-danger:hover{color:#fff;background-color:#bd2130}a.badge-danger.focus,a.badge-danger:focus{outline:0;box-shadow:0 0 0 .2rem rgba(220,53,69,.5)}.badge-light{color:#212529;background-color:#f8f9fa}a.badge-light:focus,a.badge-light:hover{color:#212529;background-color:#dae0e5}a.badge-light.focus,a.badge-light:focus{outline:0;box-shadow:0 0 0 .2rem rgba(248,249,250,.5)}.badge-dark{color:#fff;background-color:#343a40}a.badge-dark:focus,a.badge-dark:hover{color:#fff;background-color:#1d2124}a.badge-dark.focus,a.badge-dark:focus{outline:0;box-shadow:0 0 0 .2rem rgba(52,58,64,.5)}.jumbotron{padding:2rem 1rem;margin-bottom:2rem;background-color:#e9ecef;border-radius:.3rem}@media (min-width:576px){.jumbotron{padding:4rem 2rem}}.jumbotron-fluid{padding-right:0;padding-left:0;border-radius:0}.alert{position:relative;padding:.75rem 1.25rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-right:4rem}.alert-dismissible .close{position:absolute;top:0;right:0;padding:.75rem 1.25rem;color:inherit}.alert-primary{color:#004085;background-color:#cce5ff;border-color:#b8daff}.alert-primary hr{border-top-color:#9fcdff}.alert-primary .alert-link{color:#002752}.alert-secondary{color:#383d41;background-color:#e2e3e5;border-color:#d6d8db}.alert-secondary hr{border-top-color:#c8cbcf}.alert-secondary .alert-link{color:#202326}.alert-success{color:#155724;background-color:#d4edda;border-color:#c3e6cb}.alert-success hr{border-top-color:#b1dfbb}.alert-success .alert-link{color:#0b2e13}.alert-info{color:#0c5460;background-color:#d1ecf1;border-color:#bee5eb}.alert-info hr{border-top-color:#abdde5}.alert-info .alert-link{color:#062c33}.alert-warning{color:#856404;background-color:#fff3cd;border-color:#ffeeba}.alert-warning hr{border-top-color:#ffe8a1}.alert-warning .alert-link{color:#533f03}.alert-danger{color:#721c24;background-color:#f8d7da;border-color:#f5c6cb}.alert-danger hr{border-top-color:#f1b0b7}.alert-danger .alert-link{color:#491217}.alert-light{color:#818182;background-color:#fefefe;border-color:#fdfdfe}.alert-light hr{border-top-color:#ececf6}.alert-light 
.alert-link{color:#686868}.alert-dark{color:#1b1e21;background-color:#d6d8d9;border-color:#c6c8ca}.alert-dark hr{border-top-color:#b9bbbe}.alert-dark .alert-link{color:#040505}@-webkit-keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}@keyframes progress-bar-stripes{from{background-position:1rem 0}to{background-position:0 0}}.progress{display:-ms-flexbox;display:flex;height:1rem;overflow:hidden;line-height:0;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#7a3016;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:progress-bar-stripes 1s linear infinite;animation:progress-bar-stripes 1s linear infinite}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.media{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start}.media-body{-ms-flex:1;flex:1}.list-group{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;padding-left:0;margin-bottom:0;border-radius:.25rem}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.75rem 1.25rem;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-left-radius:inherit;border-top-right-radius:inherit}.list-group-item:last-child{border-bottom-right-radius:inherit;border-bottom-left-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#7a3016;border-color:#7a3016}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}@media 
(min-width:576px){.list-group-horizontal-sm{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:768px){.list-group-horizontal-md{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{-ms-flex-direction:row;flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-left-radius:.25rem;border-top-right-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-right-radius:.25rem;border-bottom-left-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-left-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-left:-1px;border-left-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 
1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#004085;background-color:#b8daff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#004085;background-color:#9fcdff}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#004085;border-color:#004085}.list-group-item-secondary{color:#383d41;background-color:#d6d8db}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#383d41;background-color:#c8cbcf}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#383d41;border-color:#383d41}.list-group-item-success{color:#155724;background-color:#c3e6cb}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#155724;background-color:#b1dfbb}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#155724;border-color:#155724}.list-group-item-info{color:#0c5460;background-color:#bee5eb}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#0c5460;background-color:#abdde5}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#0c5460;border-color:#0c5460}.list-group-item-warning{color:#856404;background-color:#ffeeba}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#856404;background-color:#ffe8a1}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#856404;border-color:#856404}.list-group-item-danger{color:#721c24;background-color:#f5c6cb}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#721c24;background-color:#f1b0b7}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#721c24;border-color:#721c24}.list-group-item-light{color:#818182;background-color:#fdfdfe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#818182;background-color:#ececf6}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#818182;border-color:#818182}.list-group-item-dark{color:#1b1e21;background-color:#c6c8ca}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#1b1e21;background-color:#b9bbbe}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#1b1e21;border-color:#1b1e21}.close{float:right;font-size:1.5rem;font-weight:700;line-height:1;color:#000;text-shadow:0 1px 0 #fff;opacity:.5}.close:hover{color:#000;text-decoration:none}.close:not(:disabled):not(.disabled):focus,.close:not(:disabled):not(.disabled):hover{opacity:.75}button.close{padding:0;background-color:transparent;border:0}a.close.disabled{pointer-events:none}.toast{max-width:350px;overflow:hidden;font-size:.875rem;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .25rem .75rem rgba(0,0,0,.1);-webkit-backdrop-filter:blur(10px);backdrop-filter:blur(10px);opacity:0;border-radius:.25rem}.toast:not(:last-child){margin-bottom:.75rem}.toast.showing{opacity:1}.toast.show{display:block;opacity:1}.toast.hide{display:none}.toast-header{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:.25rem 
.75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05)}.toast-body{padding:.75rem}.modal-open{overflow:hidden}.modal-open .modal{overflow-x:hidden;overflow-y:auto}.modal{position:fixed;top:0;left:0;z-index:1050;display:none;width:100%;height:100%;overflow:hidden;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:-webkit-transform .3s ease-out;transition:transform .3s ease-out;transition:transform .3s ease-out,-webkit-transform .3s ease-out;-webkit-transform:translate(0,-50px);transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{-webkit-transform:none;transform:none}.modal.modal-static .modal-dialog{-webkit-transform:scale(1.02);transform:scale(1.02)}.modal-dialog-scrollable{display:-ms-flexbox;display:flex;max-height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 1rem);overflow:hidden}.modal-dialog-scrollable .modal-footer,.modal-dialog-scrollable .modal-header{-ms-flex-negative:0;flex-shrink:0}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;min-height:calc(100% - 1rem)}.modal-dialog-centered::before{display:block;height:calc(100vh - 1rem);height:-webkit-min-content;height:-moz-min-content;height:min-content;content:""}.modal-dialog-centered.modal-dialog-scrollable{-ms-flex-direction:column;flex-direction:column;-ms-flex-pack:center;justify-content:center;height:100%}.modal-dialog-centered.modal-dialog-scrollable .modal-content{max-height:none}.modal-dialog-centered.modal-dialog-scrollable::before{content:none}.modal-content{position:relative;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:-ms-flexbox;display:flex;-ms-flex-align:start;align-items:flex-start;-ms-flex-pack:justify;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.modal-header .close{padding:1rem 1rem;margin:-1rem -1rem -1rem auto}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;-ms-flex:1 1 auto;flex:1 1 auto;padding:1rem}.modal-footer{display:-ms-flexbox;display:flex;-ms-flex-wrap:wrap;flex-wrap:wrap;-ms-flex-align:center;align-items:center;-ms-flex-pack:end;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-right-radius:calc(.3rem - 1px);border-bottom-left-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}.modal-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{max-height:calc(100% - 3.5rem)}.modal-dialog-scrollable .modal-content{max-height:calc(100vh - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-dialog-centered::before{height:calc(100vh - 3.5rem);height:-webkit-min-content;height:-moz-min-content;height:min-content}.modal-sm{max-width:300px}}@media 
(min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.tooltip{position:absolute;z-index:1070;display:block;margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[x-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[x-placement^=top] .arrow,.bs-tooltip-top .arrow{bottom:0}.bs-tooltip-auto[x-placement^=top] .arrow::before,.bs-tooltip-top .arrow::before{top:0;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[x-placement^=right],.bs-tooltip-right{padding:0 .4rem}.bs-tooltip-auto[x-placement^=right] .arrow,.bs-tooltip-right .arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=right] .arrow::before,.bs-tooltip-right .arrow::before{right:0;border-width:.4rem .4rem .4rem 0;border-right-color:#000}.bs-tooltip-auto[x-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[x-placement^=bottom] .arrow,.bs-tooltip-bottom .arrow{top:0}.bs-tooltip-auto[x-placement^=bottom] .arrow::before,.bs-tooltip-bottom .arrow::before{bottom:0;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[x-placement^=left],.bs-tooltip-left{padding:0 .4rem}.bs-tooltip-auto[x-placement^=left] .arrow,.bs-tooltip-left .arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[x-placement^=left] .arrow::before,.bs-tooltip-left .arrow::before{left:0;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1060;display:block;max-width:276px;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-style:normal;font-weight:400;line-height:1.5;text-align:left;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .arrow{position:absolute;display:block;width:1rem;height:.5rem;margin:0 .3rem}.popover .arrow::after,.popover .arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[x-placement^=top],.bs-popover-top{margin-bottom:.5rem}.bs-popover-auto[x-placement^=top]>.arrow,.bs-popover-top>.arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=top]>.arrow::before,.bs-popover-top>.arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=top]>.arrow::after,.bs-popover-top>.arrow::after{bottom:1px;border-width:.5rem .5rem 
0;border-top-color:#fff}.bs-popover-auto[x-placement^=right],.bs-popover-right{margin-left:.5rem}.bs-popover-auto[x-placement^=right]>.arrow,.bs-popover-right>.arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=right]>.arrow::before,.bs-popover-right>.arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=right]>.arrow::after,.bs-popover-right>.arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.bs-popover-auto[x-placement^=bottom],.bs-popover-bottom{margin-top:.5rem}.bs-popover-auto[x-placement^=bottom]>.arrow,.bs-popover-bottom>.arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[x-placement^=bottom]>.arrow::before,.bs-popover-bottom>.arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=bottom]>.arrow::after,.bs-popover-bottom>.arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[x-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;left:50%;display:block;width:1rem;margin-left:-.5rem;content:"";border-bottom:1px solid #f7f7f7}.bs-popover-auto[x-placement^=left],.bs-popover-left{margin-right:.5rem}.bs-popover-auto[x-placement^=left]>.arrow,.bs-popover-left>.arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem;margin:.3rem 0}.bs-popover-auto[x-placement^=left]>.arrow::before,.bs-popover-left>.arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[x-placement^=left]>.arrow::after,.bs-popover-left>.arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.popover-header{padding:.5rem .75rem;margin-bottom:0;font-size:1rem;background-color:#f7f7f7;border-bottom:1px solid #ebebeb;border-top-left-radius:calc(.3rem - 1px);border-top-right-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:.5rem .75rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{-ms-touch-action:pan-y;touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:-webkit-transform .6s ease-in-out;transition:transform .6s ease-in-out;transition:transform .6s ease-in-out,-webkit-transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-right,.carousel-item-next:not(.carousel-item-left){-webkit-transform:translateX(100%);transform:translateX(100%)}.active.carousel-item-left,.carousel-item-prev:not(.carousel-item-right){-webkit-transform:translateX(-100%);transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;-webkit-transform:none;transform:none}.carousel-fade .carousel-item-next.carousel-item-left,.carousel-fade .carousel-item-prev.carousel-item-right,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-left,.carousel-fade .active.carousel-item-right{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-left,.carousel-fade 
.active.carousel-item-right{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;width:15%;color:#fff;text-align:center;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:20px;height:20px;background:no-repeat 50%/100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M5.25 0l-4 4 4 4 1.5-1.5L4.25 4l2.5-2.5L5.25 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:15;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;padding-left:0;margin-right:15%;margin-left:15%;list-style:none}.carousel-indicators li{box-sizing:content-box;-ms-flex:0 1 auto;flex:0 1 auto;width:30px;height:3px;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators li{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:20px;left:15%;z-index:10;padding-top:20px;padding-bottom:20px;color:#fff;text-align:center}@-webkit-keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}@keyframes spinner-border{to{-webkit-transform:rotate(360deg);transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;border:.25em solid currentColor;border-right-color:transparent;border-radius:50%;-webkit-animation:spinner-border .75s linear infinite;animation:spinner-border .75s linear infinite}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}@keyframes spinner-grow{0%{-webkit-transform:scale(0);transform:scale(0)}50%{opacity:1;-webkit-transform:none;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:text-bottom;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:spinner-grow .75s linear infinite;animation:spinner-grow .75s linear 
infinite}.spinner-grow-sm{width:1rem;height:1rem}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.bg-primary{background-color:#7a3016!important}a.bg-primary:focus,a.bg-primary:hover,button.bg-primary:focus,button.bg-primary:hover{background-color:#0062cc!important}.bg-secondary{background-color:#6c757d!important}a.bg-secondary:focus,a.bg-secondary:hover,button.bg-secondary:focus,button.bg-secondary:hover{background-color:#545b62!important}.bg-success{background-color:#28a745!important}a.bg-success:focus,a.bg-success:hover,button.bg-success:focus,button.bg-success:hover{background-color:#1e7e34!important}.bg-info{background-color:#17a2b8!important}a.bg-info:focus,a.bg-info:hover,button.bg-info:focus,button.bg-info:hover{background-color:#117a8b!important}.bg-warning{background-color:#ffc107!important}a.bg-warning:focus,a.bg-warning:hover,button.bg-warning:focus,button.bg-warning:hover{background-color:#d39e00!important}.bg-danger{background-color:#dc3545!important}a.bg-danger:focus,a.bg-danger:hover,button.bg-danger:focus,button.bg-danger:hover{background-color:#bd2130!important}.bg-light{background-color:#f8f9fa!important}a.bg-light:focus,a.bg-light:hover,button.bg-light:focus,button.bg-light:hover{background-color:#dae0e5!important}.bg-dark{background-color:#343a40!important}a.bg-dark:focus,a.bg-dark:hover,button.bg-dark:focus,button.bg-dark:hover{background-color:#1d2124!important}.bg-white{background-color:#fff!important}.bg-transparent{background-color:transparent!important}.border{border:1px solid #dee2e6!important}.border-top{border-top:1px solid #dee2e6!important}.border-right{border-right:1px solid #dee2e6!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-left{border-left:1px solid 
#dee2e6!important}.border-0{border:0!important}.border-top-0{border-top:0!important}.border-right-0{border-right:0!important}.border-bottom-0{border-bottom:0!important}.border-left-0{border-left:0!important}.border-primary{border-color:#7a3016!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#28a745!important}.border-info{border-color:#17a2b8!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#343a40!important}.border-white{border-color:#fff!important}.rounded-sm{border-radius:.2rem!important}.rounded{border-radius:.25rem!important}.rounded-top{border-top-left-radius:.25rem!important;border-top-right-radius:.25rem!important}.rounded-right{border-top-right-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-bottom{border-bottom-right-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-left{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-lg{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-0{border-radius:0!important}.clearfix::after{display:block;clear:both;content:""}.d-none{display:none!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:-ms-flexbox!important;display:flex!important}.d-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}@media (min-width:576px){.d-sm-none{display:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:-ms-flexbox!important;display:flex!important}.d-sm-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:768px){.d-md-none{display:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:-ms-flexbox!important;display:flex!important}.d-md-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media (min-width:992px){.d-lg-none{display:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:-ms-flexbox!important;display:flex!important}.d-lg-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media 
(min-width:1200px){.d-xl-none{display:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:-ms-flexbox!important;display:flex!important}.d-xl-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}@media print{.d-print-none{display:none!important}.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:-ms-flexbox!important;display:flex!important}.d-print-inline-flex{display:-ms-inline-flexbox!important;display:inline-flex!important}}.embed-responsive{position:relative;display:block;width:100%;padding:0;overflow:hidden}.embed-responsive::before{display:block;content:""}.embed-responsive .embed-responsive-item,.embed-responsive embed,.embed-responsive iframe,.embed-responsive object,.embed-responsive video{position:absolute;top:0;bottom:0;left:0;width:100%;height:100%;border:0}.embed-responsive-21by9::before{padding-top:42.857143%}.embed-responsive-16by9::before{padding-top:56.25%}.embed-responsive-4by3::before{padding-top:75%}.embed-responsive-1by1::before{padding-top:100%}.flex-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-center{-ms-flex-align:center!important;align-items:center!important}.align-items-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}@media (min-width:576px){.flex-sm-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-sm-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-sm-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-sm-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-sm-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-sm-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-sm-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-sm-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-sm-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-sm-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-sm-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-sm-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-sm-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-sm-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-sm-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-sm-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-sm-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-sm-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-sm-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-sm-center{-ms-flex-align:center!important;align-items:center!important}.align-items-sm-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-sm-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-sm-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-sm-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-sm-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-sm-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-sm-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-sm-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-sm-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-sm-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-sm-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-sm-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-sm-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-sm-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:768px){.flex-md-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-md-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-md-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-md-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-md-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-md-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-md-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-md-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-md-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-md-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-md-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-md-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-md-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-md-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-md-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-md-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-md-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-md-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-md-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-md-center{-ms-flex-align:center!important;align-items:center!important}.align-items-md-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-md-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-md-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-md-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-md-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-md-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-md-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-md-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-md-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-md-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-md-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-md-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-md-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-md-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:992px){.flex-lg-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-lg-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-lg-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-lg-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-lg-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-lg-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-lg-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-lg-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-lg-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-lg-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-lg-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-lg-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-lg-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-lg-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-lg-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-lg-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-lg-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-lg-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-lg-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-lg-center{-ms-flex-align:center!important;align-items:center!important}.align-items-lg-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-lg-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-lg-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-lg-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-lg-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-lg-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-lg-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-lg-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-lg-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-lg-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-lg-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-lg-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-lg-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-lg-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}@media (min-width:1200px){.flex-xl-row{-ms-flex-direction:row!important;flex-direction:row!important}.flex-xl-column{-ms-flex-direction:column!important;flex-direction:column!important}.flex-xl-row-reverse{-ms-flex-direction:row-reverse!important;flex-direction:row-reverse!important}.flex-xl-column-reverse{-ms-flex-direction:column-reverse!important;flex-direction:column-reverse!important}.flex-xl-wrap{-ms-flex-wrap:wrap!important;flex-wrap:wrap!important}.flex-xl-nowrap{-ms-flex-wrap:nowrap!important;flex-wrap:nowrap!important}.flex-xl-wrap-reverse{-ms-flex-wrap:wrap-reverse!important;flex-wrap:wrap-reverse!important}.flex-xl-fill{-ms-flex:1 1 auto!important;flex:1 1 
auto!important}.flex-xl-grow-0{-ms-flex-positive:0!important;flex-grow:0!important}.flex-xl-grow-1{-ms-flex-positive:1!important;flex-grow:1!important}.flex-xl-shrink-0{-ms-flex-negative:0!important;flex-shrink:0!important}.flex-xl-shrink-1{-ms-flex-negative:1!important;flex-shrink:1!important}.justify-content-xl-start{-ms-flex-pack:start!important;justify-content:flex-start!important}.justify-content-xl-end{-ms-flex-pack:end!important;justify-content:flex-end!important}.justify-content-xl-center{-ms-flex-pack:center!important;justify-content:center!important}.justify-content-xl-between{-ms-flex-pack:justify!important;justify-content:space-between!important}.justify-content-xl-around{-ms-flex-pack:distribute!important;justify-content:space-around!important}.align-items-xl-start{-ms-flex-align:start!important;align-items:flex-start!important}.align-items-xl-end{-ms-flex-align:end!important;align-items:flex-end!important}.align-items-xl-center{-ms-flex-align:center!important;align-items:center!important}.align-items-xl-baseline{-ms-flex-align:baseline!important;align-items:baseline!important}.align-items-xl-stretch{-ms-flex-align:stretch!important;align-items:stretch!important}.align-content-xl-start{-ms-flex-line-pack:start!important;align-content:flex-start!important}.align-content-xl-end{-ms-flex-line-pack:end!important;align-content:flex-end!important}.align-content-xl-center{-ms-flex-line-pack:center!important;align-content:center!important}.align-content-xl-between{-ms-flex-line-pack:justify!important;align-content:space-between!important}.align-content-xl-around{-ms-flex-line-pack:distribute!important;align-content:space-around!important}.align-content-xl-stretch{-ms-flex-line-pack:stretch!important;align-content:stretch!important}.align-self-xl-auto{-ms-flex-item-align:auto!important;align-self:auto!important}.align-self-xl-start{-ms-flex-item-align:start!important;align-self:flex-start!important}.align-self-xl-end{-ms-flex-item-align:end!important;align-self:flex-end!important}.align-self-xl-center{-ms-flex-item-align:center!important;align-self:center!important}.align-self-xl-baseline{-ms-flex-item-align:baseline!important;align-self:baseline!important}.align-self-xl-stretch{-ms-flex-item-align:stretch!important;align-self:stretch!important}}.float-left{float:left!important}.float-right{float:right!important}.float-none{float:none!important}@media (min-width:576px){.float-sm-left{float:left!important}.float-sm-right{float:right!important}.float-sm-none{float:none!important}}@media (min-width:768px){.float-md-left{float:left!important}.float-md-right{float:right!important}.float-md-none{float:none!important}}@media (min-width:992px){.float-lg-left{float:left!important}.float-lg-right{float:right!important}.float-lg-none{float:none!important}}@media 
(min-width:1200px){.float-xl-left{float:left!important}.float-xl-right{float:right!important}.float-xl-none{float:none!important}}.user-select-all{-webkit-user-select:all!important;-moz-user-select:all!important;-ms-user-select:all!important;user-select:all!important}.user-select-auto{-webkit-user-select:auto!important;-moz-user-select:auto!important;-ms-user-select:auto!important;user-select:auto!important}.user-select-none{-webkit-user-select:none!important;-moz-user-select:none!important;-ms-user-select:none!important;user-select:none!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}@supports ((position:-webkit-sticky) or (position:sticky)){.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.sr-only{position:absolute;width:1px;height:1px;padding:0;margin:-1px;overflow:hidden;clip:rect(0,0,0,0);white-space:nowrap;border:0}.sr-only-focusable:active,.sr-only-focusable:focus{position:static;width:auto;height:auto;overflow:visible;clip:auto;white-space:normal}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mw-100{max-width:100%!important}.mh-100{max-height:100%!important}.min-vw-100{min-width:100vw!important}.min-vh-100{min-height:100vh!important}.vw-100{width:100vw!important}.vh-100{height:100vh!important}.m-0{margin:0!important}.mt-0,.my-0{margin-top:0!important}.mr-0,.mx-0{margin-right:0!important}.mb-0,.my-0{margin-bottom:0!important}.ml-0,.mx-0{margin-left:0!important}.m-1{margin:.25rem!important}.mt-1,.my-1{margin-top:.25rem!important}.mr-1,.mx-1{margin-right:.25rem!important}.mb-1,.my-1{margin-bottom:.25rem!important}.ml-1,.mx-1{margin-left:.25rem!important}.m-2{margin:.5rem!important}.mt-2,.my-2{margin-top:.5rem!important}.mr-2,.mx-2{margin-right:.5rem!important}.mb-2,.my-2{margin-bottom:.5rem!important}.ml-2,.mx-2{margin-left:.5rem!important}.m-3{margin:1rem!important}.mt-3,.my-3{margin-top:1rem!important}.mr-3,.mx-3{margin-right:1rem!important}.mb-3,.my-3{margin-bottom:1rem!important}.ml-3,.mx-3{margin-left:1rem!important}.m-4{margin:1.5rem!important}.mt-4,.my-4{margin-top:1.5rem!important}.mr-4,.mx-4{margin-right:1.5rem!important}.mb-4,.my-4{margin-bottom:1.5rem!important}.ml-4,.mx-4{margin-left:1.5rem!important}.m-5{margin:3rem!important}.mt-5,.my-5{margin-top:3rem!important}.mr-5,.mx-5{margin-right:3rem!important}.mb-5,.my-5{margin-bottom:3rem!important}.ml-5,.mx-5{margin-left:3rem!important}.p-0{padding:0!important}.pt-0,.py-0{padding-top:0!important}.pr-0,.px-0{padding-right:0!important}.pb-0,.py-0{padding-bottom:0!important}.pl-0,.px-0{padding-left:0!important}.p-1{padding:.25rem!important}.pt-1,.py-1{padding-top:.25rem!important}.pr-1,.px-1{padding-right:.25rem!important}.pb-1,.py-1{padding-botto
m:.25rem!important}.pl-1,.px-1{padding-left:.25rem!important}.p-2{padding:.5rem!important}.pt-2,.py-2{padding-top:.5rem!important}.pr-2,.px-2{padding-right:.5rem!important}.pb-2,.py-2{padding-bottom:.5rem!important}.pl-2,.px-2{padding-left:.5rem!important}.p-3{padding:1rem!important}.pt-3,.py-3{padding-top:1rem!important}.pr-3,.px-3{padding-right:1rem!important}.pb-3,.py-3{padding-bottom:1rem!important}.pl-3,.px-3{padding-left:1rem!important}.p-4{padding:1.5rem!important}.pt-4,.py-4{padding-top:1.5rem!important}.pr-4,.px-4{padding-right:1.5rem!important}.pb-4,.py-4{padding-bottom:1.5rem!important}.pl-4,.px-4{padding-left:1.5rem!important}.p-5{padding:3rem!important}.pt-5,.py-5{padding-top:3rem!important}.pr-5,.px-5{padding-right:3rem!important}.pb-5,.py-5{padding-bottom:3rem!important}.pl-5,.px-5{padding-left:3rem!important}.m-n1{margin:-.25rem!important}.mt-n1,.my-n1{margin-top:-.25rem!important}.mr-n1,.mx-n1{margin-right:-.25rem!important}.mb-n1,.my-n1{margin-bottom:-.25rem!important}.ml-n1,.mx-n1{margin-left:-.25rem!important}.m-n2{margin:-.5rem!important}.mt-n2,.my-n2{margin-top:-.5rem!important}.mr-n2,.mx-n2{margin-right:-.5rem!important}.mb-n2,.my-n2{margin-bottom:-.5rem!important}.ml-n2,.mx-n2{margin-left:-.5rem!important}.m-n3{margin:-1rem!important}.mt-n3,.my-n3{margin-top:-1rem!important}.mr-n3,.mx-n3{margin-right:-1rem!important}.mb-n3,.my-n3{margin-bottom:-1rem!important}.ml-n3,.mx-n3{margin-left:-1rem!important}.m-n4{margin:-1.5rem!important}.mt-n4,.my-n4{margin-top:-1.5rem!important}.mr-n4,.mx-n4{margin-right:-1.5rem!important}.mb-n4,.my-n4{margin-bottom:-1.5rem!important}.ml-n4,.mx-n4{margin-left:-1.5rem!important}.m-n5{margin:-3rem!important}.mt-n5,.my-n5{margin-top:-3rem!important}.mr-n5,.mx-n5{margin-right:-3rem!important}.mb-n5,.my-n5{margin-bottom:-3rem!important}.ml-n5,.mx-n5{margin-left:-3rem!important}.m-auto{margin:auto!important}.mt-auto,.my-auto{margin-top:auto!important}.mr-auto,.mx-auto{margin-right:auto!important}.mb-auto,.my-auto{margin-bottom:auto!important}.ml-auto,.mx-auto{margin-left:auto!important}@media 
(min-width:576px){.m-sm-0{margin:0!important}.mt-sm-0,.my-sm-0{margin-top:0!important}.mr-sm-0,.mx-sm-0{margin-right:0!important}.mb-sm-0,.my-sm-0{margin-bottom:0!important}.ml-sm-0,.mx-sm-0{margin-left:0!important}.m-sm-1{margin:.25rem!important}.mt-sm-1,.my-sm-1{margin-top:.25rem!important}.mr-sm-1,.mx-sm-1{margin-right:.25rem!important}.mb-sm-1,.my-sm-1{margin-bottom:.25rem!important}.ml-sm-1,.mx-sm-1{margin-left:.25rem!important}.m-sm-2{margin:.5rem!important}.mt-sm-2,.my-sm-2{margin-top:.5rem!important}.mr-sm-2,.mx-sm-2{margin-right:.5rem!important}.mb-sm-2,.my-sm-2{margin-bottom:.5rem!important}.ml-sm-2,.mx-sm-2{margin-left:.5rem!important}.m-sm-3{margin:1rem!important}.mt-sm-3,.my-sm-3{margin-top:1rem!important}.mr-sm-3,.mx-sm-3{margin-right:1rem!important}.mb-sm-3,.my-sm-3{margin-bottom:1rem!important}.ml-sm-3,.mx-sm-3{margin-left:1rem!important}.m-sm-4{margin:1.5rem!important}.mt-sm-4,.my-sm-4{margin-top:1.5rem!important}.mr-sm-4,.mx-sm-4{margin-right:1.5rem!important}.mb-sm-4,.my-sm-4{margin-bottom:1.5rem!important}.ml-sm-4,.mx-sm-4{margin-left:1.5rem!important}.m-sm-5{margin:3rem!important}.mt-sm-5,.my-sm-5{margin-top:3rem!important}.mr-sm-5,.mx-sm-5{margin-right:3rem!important}.mb-sm-5,.my-sm-5{margin-bottom:3rem!important}.ml-sm-5,.mx-sm-5{margin-left:3rem!important}.p-sm-0{padding:0!important}.pt-sm-0,.py-sm-0{padding-top:0!important}.pr-sm-0,.px-sm-0{padding-right:0!important}.pb-sm-0,.py-sm-0{padding-bottom:0!important}.pl-sm-0,.px-sm-0{padding-left:0!important}.p-sm-1{padding:.25rem!important}.pt-sm-1,.py-sm-1{padding-top:.25rem!important}.pr-sm-1,.px-sm-1{padding-right:.25rem!important}.pb-sm-1,.py-sm-1{padding-bottom:.25rem!important}.pl-sm-1,.px-sm-1{padding-left:.25rem!important}.p-sm-2{padding:.5rem!important}.pt-sm-2,.py-sm-2{padding-top:.5rem!important}.pr-sm-2,.px-sm-2{padding-right:.5rem!important}.pb-sm-2,.py-sm-2{padding-bottom:.5rem!important}.pl-sm-2,.px-sm-2{padding-left:.5rem!important}.p-sm-3{padding:1rem!important}.pt-sm-3,.py-sm-3{padding-top:1rem!important}.pr-sm-3,.px-sm-3{padding-right:1rem!important}.pb-sm-3,.py-sm-3{padding-bottom:1rem!important}.pl-sm-3,.px-sm-3{padding-left:1rem!important}.p-sm-4{padding:1.5rem!important}.pt-sm-4,.py-sm-4{padding-top:1.5rem!important}.pr-sm-4,.px-sm-4{padding-right:1.5rem!important}.pb-sm-4,.py-sm-4{padding-bottom:1.5rem!important}.pl-sm-4,.px-sm-4{padding-left:1.5rem!important}.p-sm-5{padding:3rem!important}.pt-sm-5,.py-sm-5{padding-top:3rem!important}.pr-sm-5,.px-sm-5{padding-right:3rem!important}.pb-sm-5,.py-sm-5{padding-bottom:3rem!important}.pl-sm-5,.px-sm-5{padding-left:3rem!important}.m-sm-n1{margin:-.25rem!important}.mt-sm-n1,.my-sm-n1{margin-top:-.25rem!important}.mr-sm-n1,.mx-sm-n1{margin-right:-.25rem!important}.mb-sm-n1,.my-sm-n1{margin-bottom:-.25rem!important}.ml-sm-n1,.mx-sm-n1{margin-left:-.25rem!important}.m-sm-n2{margin:-.5rem!important}.mt-sm-n2,.my-sm-n2{margin-top:-.5rem!important}.mr-sm-n2,.mx-sm-n2{margin-right:-.5rem!important}.mb-sm-n2,.my-sm-n2{margin-bottom:-.5rem!important}.ml-sm-n2,.mx-sm-n2{margin-left:-.5rem!important}.m-sm-n3{margin:-1rem!important}.mt-sm-n3,.my-sm-n3{margin-top:-1rem!important}.mr-sm-n3,.mx-sm-n3{margin-right:-1rem!important}.mb-sm-n3,.my-sm-n3{margin-bottom:-1rem!important}.ml-sm-n3,.mx-sm-n3{margin-left:-1rem!important}.m-sm-n4{margin:-1.5rem!important}.mt-sm-n4,.my-sm-n4{margin-top:-1.5rem!important}.mr-sm-n4,.mx-sm-n4{margin-right:-1.5rem!important}.mb-sm-n4,.my-sm-n4{margin-bottom:-1.5rem!important}.ml-sm-n4,.mx-sm-n4{margin-left:-1.5rem!important}.m-sm-n5{margi
n:-3rem!important}.mt-sm-n5,.my-sm-n5{margin-top:-3rem!important}.mr-sm-n5,.mx-sm-n5{margin-right:-3rem!important}.mb-sm-n5,.my-sm-n5{margin-bottom:-3rem!important}.ml-sm-n5,.mx-sm-n5{margin-left:-3rem!important}.m-sm-auto{margin:auto!important}.mt-sm-auto,.my-sm-auto{margin-top:auto!important}.mr-sm-auto,.mx-sm-auto{margin-right:auto!important}.mb-sm-auto,.my-sm-auto{margin-bottom:auto!important}.ml-sm-auto,.mx-sm-auto{margin-left:auto!important}}@media (min-width:768px){.m-md-0{margin:0!important}.mt-md-0,.my-md-0{margin-top:0!important}.mr-md-0,.mx-md-0{margin-right:0!important}.mb-md-0,.my-md-0{margin-bottom:0!important}.ml-md-0,.mx-md-0{margin-left:0!important}.m-md-1{margin:.25rem!important}.mt-md-1,.my-md-1{margin-top:.25rem!important}.mr-md-1,.mx-md-1{margin-right:.25rem!important}.mb-md-1,.my-md-1{margin-bottom:.25rem!important}.ml-md-1,.mx-md-1{margin-left:.25rem!important}.m-md-2{margin:.5rem!important}.mt-md-2,.my-md-2{margin-top:.5rem!important}.mr-md-2,.mx-md-2{margin-right:.5rem!important}.mb-md-2,.my-md-2{margin-bottom:.5rem!important}.ml-md-2,.mx-md-2{margin-left:.5rem!important}.m-md-3{margin:1rem!important}.mt-md-3,.my-md-3{margin-top:1rem!important}.mr-md-3,.mx-md-3{margin-right:1rem!important}.mb-md-3,.my-md-3{margin-bottom:1rem!important}.ml-md-3,.mx-md-3{margin-left:1rem!important}.m-md-4{margin:1.5rem!important}.mt-md-4,.my-md-4{margin-top:1.5rem!important}.mr-md-4,.mx-md-4{margin-right:1.5rem!important}.mb-md-4,.my-md-4{margin-bottom:1.5rem!important}.ml-md-4,.mx-md-4{margin-left:1.5rem!important}.m-md-5{margin:3rem!important}.mt-md-5,.my-md-5{margin-top:3rem!important}.mr-md-5,.mx-md-5{margin-right:3rem!important}.mb-md-5,.my-md-5{margin-bottom:3rem!important}.ml-md-5,.mx-md-5{margin-left:3rem!important}.p-md-0{padding:0!important}.pt-md-0,.py-md-0{padding-top:0!important}.pr-md-0,.px-md-0{padding-right:0!important}.pb-md-0,.py-md-0{padding-bottom:0!important}.pl-md-0,.px-md-0{padding-left:0!important}.p-md-1{padding:.25rem!important}.pt-md-1,.py-md-1{padding-top:.25rem!important}.pr-md-1,.px-md-1{padding-right:.25rem!important}.pb-md-1,.py-md-1{padding-bottom:.25rem!important}.pl-md-1,.px-md-1{padding-left:.25rem!important}.p-md-2{padding:.5rem!important}.pt-md-2,.py-md-2{padding-top:.5rem!important}.pr-md-2,.px-md-2{padding-right:.5rem!important}.pb-md-2,.py-md-2{padding-bottom:.5rem!important}.pl-md-2,.px-md-2{padding-left:.5rem!important}.p-md-3{padding:1rem!important}.pt-md-3,.py-md-3{padding-top:1rem!important}.pr-md-3,.px-md-3{padding-right:1rem!important}.pb-md-3,.py-md-3{padding-bottom:1rem!important}.pl-md-3,.px-md-3{padding-left:1rem!important}.p-md-4{padding:1.5rem!important}.pt-md-4,.py-md-4{padding-top:1.5rem!important}.pr-md-4,.px-md-4{padding-right:1.5rem!important}.pb-md-4,.py-md-4{padding-bottom:1.5rem!important}.pl-md-4,.px-md-4{padding-left:1.5rem!important}.p-md-5{padding:3rem!important}.pt-md-5,.py-md-5{padding-top:3rem!important}.pr-md-5,.px-md-5{padding-right:3rem!important}.pb-md-5,.py-md-5{padding-bottom:3rem!important}.pl-md-5,.px-md-5{padding-left:3rem!important}.m-md-n1{margin:-.25rem!important}.mt-md-n1,.my-md-n1{margin-top:-.25rem!important}.mr-md-n1,.mx-md-n1{margin-right:-.25rem!important}.mb-md-n1,.my-md-n1{margin-bottom:-.25rem!important}.ml-md-n1,.mx-md-n1{margin-left:-.25rem!important}.m-md-n2{margin:-.5rem!important}.mt-md-n2,.my-md-n2{margin-top:-.5rem!important}.mr-md-n2,.mx-md-n2{margin-right:-.5rem!important}.mb-md-n2,.my-md-n2{margin-bottom:-.5rem!important}.ml-md-n2,.mx-md-n2{margin-left:-.5rem!important}.m-md-n3{margin:-
1rem!important}.mt-md-n3,.my-md-n3{margin-top:-1rem!important}.mr-md-n3,.mx-md-n3{margin-right:-1rem!important}.mb-md-n3,.my-md-n3{margin-bottom:-1rem!important}.ml-md-n3,.mx-md-n3{margin-left:-1rem!important}.m-md-n4{margin:-1.5rem!important}.mt-md-n4,.my-md-n4{margin-top:-1.5rem!important}.mr-md-n4,.mx-md-n4{margin-right:-1.5rem!important}.mb-md-n4,.my-md-n4{margin-bottom:-1.5rem!important}.ml-md-n4,.mx-md-n4{margin-left:-1.5rem!important}.m-md-n5{margin:-3rem!important}.mt-md-n5,.my-md-n5{margin-top:-3rem!important}.mr-md-n5,.mx-md-n5{margin-right:-3rem!important}.mb-md-n5,.my-md-n5{margin-bottom:-3rem!important}.ml-md-n5,.mx-md-n5{margin-left:-3rem!important}.m-md-auto{margin:auto!important}.mt-md-auto,.my-md-auto{margin-top:auto!important}.mr-md-auto,.mx-md-auto{margin-right:auto!important}.mb-md-auto,.my-md-auto{margin-bottom:auto!important}.ml-md-auto,.mx-md-auto{margin-left:auto!important}}@media (min-width:992px){.m-lg-0{margin:0!important}.mt-lg-0,.my-lg-0{margin-top:0!important}.mr-lg-0,.mx-lg-0{margin-right:0!important}.mb-lg-0,.my-lg-0{margin-bottom:0!important}.ml-lg-0,.mx-lg-0{margin-left:0!important}.m-lg-1{margin:.25rem!important}.mt-lg-1,.my-lg-1{margin-top:.25rem!important}.mr-lg-1,.mx-lg-1{margin-right:.25rem!important}.mb-lg-1,.my-lg-1{margin-bottom:.25rem!important}.ml-lg-1,.mx-lg-1{margin-left:.25rem!important}.m-lg-2{margin:.5rem!important}.mt-lg-2,.my-lg-2{margin-top:.5rem!important}.mr-lg-2,.mx-lg-2{margin-right:.5rem!important}.mb-lg-2,.my-lg-2{margin-bottom:.5rem!important}.ml-lg-2,.mx-lg-2{margin-left:.5rem!important}.m-lg-3{margin:1rem!important}.mt-lg-3,.my-lg-3{margin-top:1rem!important}.mr-lg-3,.mx-lg-3{margin-right:1rem!important}.mb-lg-3,.my-lg-3{margin-bottom:1rem!important}.ml-lg-3,.mx-lg-3{margin-left:1rem!important}.m-lg-4{margin:1.5rem!important}.mt-lg-4,.my-lg-4{margin-top:1.5rem!important}.mr-lg-4,.mx-lg-4{margin-right:1.5rem!important}.mb-lg-4,.my-lg-4{margin-bottom:1.5rem!important}.ml-lg-4,.mx-lg-4{margin-left:1.5rem!important}.m-lg-5{margin:3rem!important}.mt-lg-5,.my-lg-5{margin-top:3rem!important}.mr-lg-5,.mx-lg-5{margin-right:3rem!important}.mb-lg-5,.my-lg-5{margin-bottom:3rem!important}.ml-lg-5,.mx-lg-5{margin-left:3rem!important}.p-lg-0{padding:0!important}.pt-lg-0,.py-lg-0{padding-top:0!important}.pr-lg-0,.px-lg-0{padding-right:0!important}.pb-lg-0,.py-lg-0{padding-bottom:0!important}.pl-lg-0,.px-lg-0{padding-left:0!important}.p-lg-1{padding:.25rem!important}.pt-lg-1,.py-lg-1{padding-top:.25rem!important}.pr-lg-1,.px-lg-1{padding-right:.25rem!important}.pb-lg-1,.py-lg-1{padding-bottom:.25rem!important}.pl-lg-1,.px-lg-1{padding-left:.25rem!important}.p-lg-2{padding:.5rem!important}.pt-lg-2,.py-lg-2{padding-top:.5rem!important}.pr-lg-2,.px-lg-2{padding-right:.5rem!important}.pb-lg-2,.py-lg-2{padding-bottom:.5rem!important}.pl-lg-2,.px-lg-2{padding-left:.5rem!important}.p-lg-3{padding:1rem!important}.pt-lg-3,.py-lg-3{padding-top:1rem!important}.pr-lg-3,.px-lg-3{padding-right:1rem!important}.pb-lg-3,.py-lg-3{padding-bottom:1rem!important}.pl-lg-3,.px-lg-3{padding-left:1rem!important}.p-lg-4{padding:1.5rem!important}.pt-lg-4,.py-lg-4{padding-top:1.5rem!important}.pr-lg-4,.px-lg-4{padding-right:1.5rem!important}.pb-lg-4,.py-lg-4{padding-bottom:1.5rem!important}.pl-lg-4,.px-lg-4{padding-left:1.5rem!important}.p-lg-5{padding:3rem!important}.pt-lg-5,.py-lg-5{padding-top:3rem!important}.pr-lg-5,.px-lg-5{padding-right:3rem!important}.pb-lg-5,.py-lg-5{padding-bottom:3rem!important}.pl-lg-5,.px-lg-5{padding-left:3rem!important}.m-lg-n1{margin:-.25rem!i
mportant}.mt-lg-n1,.my-lg-n1{margin-top:-.25rem!important}.mr-lg-n1,.mx-lg-n1{margin-right:-.25rem!important}.mb-lg-n1,.my-lg-n1{margin-bottom:-.25rem!important}.ml-lg-n1,.mx-lg-n1{margin-left:-.25rem!important}.m-lg-n2{margin:-.5rem!important}.mt-lg-n2,.my-lg-n2{margin-top:-.5rem!important}.mr-lg-n2,.mx-lg-n2{margin-right:-.5rem!important}.mb-lg-n2,.my-lg-n2{margin-bottom:-.5rem!important}.ml-lg-n2,.mx-lg-n2{margin-left:-.5rem!important}.m-lg-n3{margin:-1rem!important}.mt-lg-n3,.my-lg-n3{margin-top:-1rem!important}.mr-lg-n3,.mx-lg-n3{margin-right:-1rem!important}.mb-lg-n3,.my-lg-n3{margin-bottom:-1rem!important}.ml-lg-n3,.mx-lg-n3{margin-left:-1rem!important}.m-lg-n4{margin:-1.5rem!important}.mt-lg-n4,.my-lg-n4{margin-top:-1.5rem!important}.mr-lg-n4,.mx-lg-n4{margin-right:-1.5rem!important}.mb-lg-n4,.my-lg-n4{margin-bottom:-1.5rem!important}.ml-lg-n4,.mx-lg-n4{margin-left:-1.5rem!important}.m-lg-n5{margin:-3rem!important}.mt-lg-n5,.my-lg-n5{margin-top:-3rem!important}.mr-lg-n5,.mx-lg-n5{margin-right:-3rem!important}.mb-lg-n5,.my-lg-n5{margin-bottom:-3rem!important}.ml-lg-n5,.mx-lg-n5{margin-left:-3rem!important}.m-lg-auto{margin:auto!important}.mt-lg-auto,.my-lg-auto{margin-top:auto!important}.mr-lg-auto,.mx-lg-auto{margin-right:auto!important}.mb-lg-auto,.my-lg-auto{margin-bottom:auto!important}.ml-lg-auto,.mx-lg-auto{margin-left:auto!important}}@media (min-width:1200px){.m-xl-0{margin:0!important}.mt-xl-0,.my-xl-0{margin-top:0!important}.mr-xl-0,.mx-xl-0{margin-right:0!important}.mb-xl-0,.my-xl-0{margin-bottom:0!important}.ml-xl-0,.mx-xl-0{margin-left:0!important}.m-xl-1{margin:.25rem!important}.mt-xl-1,.my-xl-1{margin-top:.25rem!important}.mr-xl-1,.mx-xl-1{margin-right:.25rem!important}.mb-xl-1,.my-xl-1{margin-bottom:.25rem!important}.ml-xl-1,.mx-xl-1{margin-left:.25rem!important}.m-xl-2{margin:.5rem!important}.mt-xl-2,.my-xl-2{margin-top:.5rem!important}.mr-xl-2,.mx-xl-2{margin-right:.5rem!important}.mb-xl-2,.my-xl-2{margin-bottom:.5rem!important}.ml-xl-2,.mx-xl-2{margin-left:.5rem!important}.m-xl-3{margin:1rem!important}.mt-xl-3,.my-xl-3{margin-top:1rem!important}.mr-xl-3,.mx-xl-3{margin-right:1rem!important}.mb-xl-3,.my-xl-3{margin-bottom:1rem!important}.ml-xl-3,.mx-xl-3{margin-left:1rem!important}.m-xl-4{margin:1.5rem!important}.mt-xl-4,.my-xl-4{margin-top:1.5rem!important}.mr-xl-4,.mx-xl-4{margin-right:1.5rem!important}.mb-xl-4,.my-xl-4{margin-bottom:1.5rem!important}.ml-xl-4,.mx-xl-4{margin-left:1.5rem!important}.m-xl-5{margin:3rem!important}.mt-xl-5,.my-xl-5{margin-top:3rem!important}.mr-xl-5,.mx-xl-5{margin-right:3rem!important}.mb-xl-5,.my-xl-5{margin-bottom:3rem!important}.ml-xl-5,.mx-xl-5{margin-left:3rem!important}.p-xl-0{padding:0!important}.pt-xl-0,.py-xl-0{padding-top:0!important}.pr-xl-0,.px-xl-0{padding-right:0!important}.pb-xl-0,.py-xl-0{padding-bottom:0!important}.pl-xl-0,.px-xl-0{padding-left:0!important}.p-xl-1{padding:.25rem!important}.pt-xl-1,.py-xl-1{padding-top:.25rem!important}.pr-xl-1,.px-xl-1{padding-right:.25rem!important}.pb-xl-1,.py-xl-1{padding-bottom:.25rem!important}.pl-xl-1,.px-xl-1{padding-left:.25rem!important}.p-xl-2{padding:.5rem!important}.pt-xl-2,.py-xl-2{padding-top:.5rem!important}.pr-xl-2,.px-xl-2{padding-right:.5rem!important}.pb-xl-2,.py-xl-2{padding-bottom:.5rem!important}.pl-xl-2,.px-xl-2{padding-left:.5rem!important}.p-xl-3{padding:1rem!important}.pt-xl-3,.py-xl-3{padding-top:1rem!important}.pr-xl-3,.px-xl-3{padding-right:1rem!important}.pb-xl-3,.py-xl-3{padding-bottom:1rem!important}.pl-xl-3,.px-xl-3{padding-left:1rem!important}.p-xl-4{p
adding:1.5rem!important}.pt-xl-4,.py-xl-4{padding-top:1.5rem!important}.pr-xl-4,.px-xl-4{padding-right:1.5rem!important}.pb-xl-4,.py-xl-4{padding-bottom:1.5rem!important}.pl-xl-4,.px-xl-4{padding-left:1.5rem!important}.p-xl-5{padding:3rem!important}.pt-xl-5,.py-xl-5{padding-top:3rem!important}.pr-xl-5,.px-xl-5{padding-right:3rem!important}.pb-xl-5,.py-xl-5{padding-bottom:3rem!important}.pl-xl-5,.px-xl-5{padding-left:3rem!important}.m-xl-n1{margin:-.25rem!important}.mt-xl-n1,.my-xl-n1{margin-top:-.25rem!important}.mr-xl-n1,.mx-xl-n1{margin-right:-.25rem!important}.mb-xl-n1,.my-xl-n1{margin-bottom:-.25rem!important}.ml-xl-n1,.mx-xl-n1{margin-left:-.25rem!important}.m-xl-n2{margin:-.5rem!important}.mt-xl-n2,.my-xl-n2{margin-top:-.5rem!important}.mr-xl-n2,.mx-xl-n2{margin-right:-.5rem!important}.mb-xl-n2,.my-xl-n2{margin-bottom:-.5rem!important}.ml-xl-n2,.mx-xl-n2{margin-left:-.5rem!important}.m-xl-n3{margin:-1rem!important}.mt-xl-n3,.my-xl-n3{margin-top:-1rem!important}.mr-xl-n3,.mx-xl-n3{margin-right:-1rem!important}.mb-xl-n3,.my-xl-n3{margin-bottom:-1rem!important}.ml-xl-n3,.mx-xl-n3{margin-left:-1rem!important}.m-xl-n4{margin:-1.5rem!important}.mt-xl-n4,.my-xl-n4{margin-top:-1.5rem!important}.mr-xl-n4,.mx-xl-n4{margin-right:-1.5rem!important}.mb-xl-n4,.my-xl-n4{margin-bottom:-1.5rem!important}.ml-xl-n4,.mx-xl-n4{margin-left:-1.5rem!important}.m-xl-n5{margin:-3rem!important}.mt-xl-n5,.my-xl-n5{margin-top:-3rem!important}.mr-xl-n5,.mx-xl-n5{margin-right:-3rem!important}.mb-xl-n5,.my-xl-n5{margin-bottom:-3rem!important}.ml-xl-n5,.mx-xl-n5{margin-left:-3rem!important}.m-xl-auto{margin:auto!important}.mt-xl-auto,.my-xl-auto{margin-top:auto!important}.mr-xl-auto,.mx-xl-auto{margin-right:auto!important}.mb-xl-auto,.my-xl-auto{margin-bottom:auto!important}.ml-xl-auto,.mx-xl-auto{margin-left:auto!important}}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;pointer-events:auto;content:"";background-color:rgba(0,0,0,0)}.text-monospace{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace!important}.text-justify{text-align:justify!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.text-left{text-align:left!important}.text-right{text-align:right!important}.text-center{text-align:center!important}@media (min-width:576px){.text-sm-left{text-align:left!important}.text-sm-right{text-align:right!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.text-md-left{text-align:left!important}.text-md-right{text-align:right!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.text-lg-left{text-align:left!important}.text-lg-right{text-align:right!important}.text-lg-center{text-align:center!important}}@media 
(min-width:1200px){.text-xl-left{text-align:left!important}.text-xl-right{text-align:right!important}.text-xl-center{text-align:center!important}}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.font-weight-light{font-weight:300!important}.font-weight-lighter{font-weight:lighter!important}.font-weight-normal{font-weight:400!important}.font-weight-bold{font-weight:700!important}.font-weight-bolder{font-weight:bolder!important}.font-italic{font-style:italic!important}.text-white{color:#fff!important}.text-primary{color:#7a3016!important}a.text-primary:focus,a.text-primary:hover{color:#0056b3!important}.text-secondary{color:#6c757d!important}a.text-secondary:focus,a.text-secondary:hover{color:#494f54!important}.text-success{color:#28a745!important}a.text-success:focus,a.text-success:hover{color:#19692c!important}.text-info{color:#17a2b8!important}a.text-info:focus,a.text-info:hover{color:#0f6674!important}.text-warning{color:#ffc107!important}a.text-warning:focus,a.text-warning:hover{color:#ba8b00!important}.text-danger{color:#dc3545!important}a.text-danger:focus,a.text-danger:hover{color:#a71d2a!important}.text-light{color:#f8f9fa!important}a.text-light:focus,a.text-light:hover{color:#cbd3da!important}.text-dark{color:#343a40!important}a.text-dark:focus,a.text-dark:hover{color:#121416!important}.text-body{color:#212529!important}.text-muted{color:#6c757d!important}.text-black-50{color:rgba(0,0,0,.5)!important}.text-white-50{color:rgba(255,255,255,.5)!important}.text-hide{font:0/0 a;color:transparent;text-shadow:none;background-color:transparent;border:0}.text-decoration-none{text-decoration:none!important}.text-break{word-wrap:break-word!important}.text-reset{color:inherit!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media print{*,::after,::before{text-shadow:none!important;box-shadow:none!important}a:not(.btn){text-decoration:underline}abbr[title]::after{content:" (" attr(title) ")"}pre{white-space:pre-wrap!important}blockquote,pre{border:1px solid #adb5bd;page-break-inside:avoid}thead{display:table-header-group}img,tr{page-break-inside:avoid}h2,h3,p{orphans:3;widows:3}h2,h3{page-break-after:avoid}@page{size:a3}body{min-width:992px!important}.container{min-width:992px!important}.navbar{display:none}.badge{border:1px solid #000}.table{border-collapse:collapse!important}.table td,.table th{background-color:#fff!important}.table-bordered td,.table-bordered th{border:1px solid #dee2e6!important}.table-dark{color:inherit}.table-dark tbody+tbody,.table-dark td,.table-dark th,.table-dark thead th{border-color:#dee2e6}.table .thead-dark th{color:inherit;border-color:#dee2e6}} -/*# sourceMappingURL=bootstrap.min.css.map */ \ No newline at end of file diff --git a/spaces/huggingdalle/dalle-mini/html2canvas.js b/spaces/huggingdalle/dalle-mini/html2canvas.js deleted file mode 100644 index dd1606d8698aae0ed4877058d6a218fda3a515cd..0000000000000000000000000000000000000000 --- a/spaces/huggingdalle/dalle-mini/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? 
globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. - - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? 
y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
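The LineBreaker defined above exposes the UAX #14 rules as a plain iterator of break opportunities. A minimal usage sketch follows; LineBreaker and Break come from the code above, while the sample string and option values are arbitrary assumptions:

```js
// Sketch: iterate the break opportunities produced by the LineBreaker above.
var breaker = LineBreaker('The quick brown fox', { lineBreak: 'strict', wordBreak: 'normal' });
var segments = [];
var item = breaker.next();
while (!item.done) {
    // Break.slice() returns the text between the previous and the current
    // break opportunity; Break.required is true for hard breaks (LB4/LB5).
    segments.push(item.value.slice());
    item = breaker.next();
}
// segments -> ['The ', 'quick ', 'brown ', 'fox']  (LB18: breaks fall after the spaces)
```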
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x003d; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
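stringToNumber above implements the CSS number grammar (sign, integer part, fraction, exponent) directly on code point arrays. A small sketch of what it computes; toCodePoints$1 is the string-to-code-points helper defined earlier in this bundle, and the literals are arbitrary examples:

```js
// Sketch: result = sign * (int + frac * 10^-fracDigits) * 10^(expsign * exp)
stringToNumber(toCodePoints$1('12.5e2'));  // -> 1250
stringToNumber(toCodePoints$1('-0.5'));    // -> -0.5
stringToNumber(toCodePoints$1('+42'));     // -> 42
```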
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - values.push(); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
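The Tokenizer above follows the css-syntax-3 tokenization algorithm. A brief usage sketch; the input string is an arbitrary example, and the token type numbers match the inline comments in the code above:

```js
// Sketch: turn a CSS value string into the token stream consumed by the Parser below.
var tokenizer = new Tokenizer();
tokenizer.write('url(image.png) 50% / 10px');
var tokens = tokenizer.read();
// tokens contains, in order:
//   { type: 22 /* URL_TOKEN */, value: 'image.png' }
//   { type: 31 /* WHITESPACE_TOKEN */ }
//   { type: 16 /* PERCENTAGE_TOKEN */, number: 50, flags: FLAG_INTEGER }
//   { type: 31 /* WHITESPACE_TOKEN */ }
//   { type: 6 /* DELIM_TOKEN */, value: '/' }
//   { type: 31 /* WHITESPACE_TOKEN */ }
//   { type: 15 /* DIMENSION_TOKEN */, number: 10, unit: 'px', flags: FLAG_INTEGER }
```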
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
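Parser.parseValue and Parser.parseValues build component values (simple blocks and functions) on top of the token stream, and parseFunctionArgs splits a function's values at the commas. A short sketch with an arbitrary input:

```js
// Sketch: parse a functional value and split its arguments.
var values = Parser.parseValues('rgb(255, 128, 0)');
var fn = values[0];                      // { type: 18 /* FUNCTION */, name: 'rgb', values: [...] }
var args = parseFunctionArgs(fn.values); // whitespace is dropped, commas delimit the arguments
// args -> [[NUMBER 255], [NUMBER 128], [NUMBER 0]]
```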
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
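Colors are represented as a single unsigned 32-bit integer, one byte each for red, green, blue and alpha: pack builds that value and asString converts it back to a CSS color string. A quick sketch with arbitrary input values:

```js
// Sketch: RGBA is packed as 0xRRGGBBAA.
var c = pack(51, 102, 153, 1);       // -> 0x336699ff
asString(c);                         // -> 'rgb(51,102,153)'
asString(pack(255, 0, 0, 0.5));      // -> 'rgba(255,0,0,0.5019...)', alpha is rounded to one byte
isTransparent(pack(0, 0, 0, 0));     // -> true, the low alpha byte is zero
```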
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb886bbff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
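// Editor's note (worked example for calculateGradientDirection, not part of the original
// source): for an angle of PI (the default deg(180), i.e. "to bottom") the gradient line length
// is |w*sin(PI)| + |h*cos(PI)| = h, xDiff is 0 and yDiff is h/2, so the returned tuple is
// [h, w/2, w/2, 0, h]: a vertical line from the top centre (w/2, 0) to the bottom centre (w/2, h).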
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
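// Editor's note (illustrative, not part of the original source): radialGradient() scans its
// first comma-separated argument for shape/size keywords and an "at <position>" clause before
// reading color stops, e.g. "circle closest-side at center" sets shape = CIRCLE,
// size = CLOSEST_SIDE and pushes FIFTY_PERCENT into position, while explicit lengths such as
// "40px 20px" are collected into size as an [rx, ry] pair. The prefixed parser around this
// reducer does the same, except it takes the position from argument 0 and shape/size from argument 1.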
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
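// Editor's note (illustrative, not part of the original source): parseBackgroundRepeat also
// accepts the two-value longhand produced by getComputedStyle, so "repeat no-repeat" maps to
// REPEAT_X and "no-repeat repeat" maps to REPEAT_Y, with anything unrecognised falling back to REPEAT.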
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
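// Editor's note (worked example, not part of the original source): parseDisplayValue() above
// returns a power-of-two flag per keyword and the display descriptor ORs them together, so a
// computed value like "list-item block" becomes 2048 | 2 = 2050; the contains(bit, value)
// helper defined further down then tests membership with a bitwise AND, e.g. contains(2050, 2048) === true.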
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
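// Editor's note (illustrative, not part of the original source): this transform parser only
// understands matrix() and matrix3d(). matrix3d() is collapsed to its 2D affine part, keeping
// values 0, 1, 4, 5, 12 and 13 of the 16-value list (returned as [a1, b1, a2, b2, a4, b4]) and
// discarding every 3D term, which matches the "doesn't support 3D transforms" comment in the source.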
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
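// Editor's note (worked example, not part of the original source): each counter name may be
// followed by an optional integer that defaults to 1 here (and to 0 in counter-reset below), so
// "section 2 figure" parses to [{counter: 'section', increment: 2}, {counter: 'figure', increment: 1}].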
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
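// Editor's note (illustrative, not part of the original source): overflow is parsed as a list
// so both forms of the shorthand work here: a single keyword is reused for both axes, while a
// two-value computed style such as "hidden auto" yields overflowX = HIDDEN and overflowY = AUTO.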
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
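// Editor's note (not part of the original source): the long base64 literal that follows appears
// to be the packed Unicode trie data shipped with the bundled text-segmentation helper (see the
// licence header above); it is decoded at runtime into a lookup table used for text segmentation.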
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
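 * (Illustrative aside, mirroring the Trie.get method defined further down: for an
 * ordinary BMP code point cp the stored value is resolved as
 * data[(index[cp >> UTRIE2_SHIFT_2] << UTRIE2_INDEX_SHIFT) + (cp & UTRIE2_DATA_MASK)],
 * i.e. the 16-bit index entry is shifted left by this constant before the in-block
 * data offset is added.)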
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
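// Worked illustration of the branch below (illustrative comment only; it introduces no
// new identifiers): for two consecutive flags such as "🇺🇸🇩🇪" every class is RI. At the
// boundary between the two flags exactly one RI precedes `current`, so countRI ends up
// odd, the early return is skipped and the position falls through to BREAK_ALLOWED;
// inside either flag countRI is even (0 or 2) and the break is suppressed, as GB12 requires.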
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
testForeignObject(document) - : Promise.resolve(false); - Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value }); - return value; - }, - get SUPPORT_CORS_IMAGES() { - var value = testCORS(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value }); - return value; - }, - get SUPPORT_RESPONSE_TYPE() { - var value = testResponseType(); - Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value }); - return value; - }, - get SUPPORT_CORS_XHR() { - var value = 'withCredentials' in new XMLHttpRequest(); - Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value }); - return value; - }, - get SUPPORT_NATIVE_TEXT_SEGMENTATION() { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter); - Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value }); - return value; - } - }; - - var TextBounds = /** @class */ (function () { - function TextBounds(text, bounds) { - this.text = text; - this.bounds = bounds; - } - return TextBounds; - }()); - var parseTextBounds = function (context, value, styles, node) { - var textList = breakText(value, styles); - var textBounds = []; - var offset = 0; - textList.forEach(function (text) { - if (styles.textDecorationLine.length || text.trim().length > 0) { - if (FEATURES.SUPPORT_RANGE_BOUNDS) { - var clientRects = createRange(node, offset, text.length).getClientRects(); - if (clientRects.length > 1) { - var subSegments = segmentGraphemes(text); - var subOffset_1 = 0; - subSegments.forEach(function (subSegment) { - textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects()))); - subOffset_1 += subSegment.length; - }); - } - else { - textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects))); - } - } - else { - var replacementNode = node.splitText(text.length); - textBounds.push(new TextBounds(text, getWrapperBounds(context, node))); - node = replacementNode; - } - } - else if (!FEATURES.SUPPORT_RANGE_BOUNDS) { - node = node.splitText(text.length); - } - offset += text.length; - }); - return textBounds; - }; - var getWrapperBounds = function (context, node) { - var ownerDocument = node.ownerDocument; - if (ownerDocument) { - var wrapper = ownerDocument.createElement('html2canvaswrapper'); - wrapper.appendChild(node.cloneNode(true)); - var parentNode = node.parentNode; - if (parentNode) { - parentNode.replaceChild(wrapper, node); - var bounds = parseBounds(context, wrapper); - if (wrapper.firstChild) { - parentNode.replaceChild(wrapper.firstChild, wrapper); - } - return bounds; - } - } - return Bounds.EMPTY; - }; - var createRange = function (node, offset, length) { - var ownerDocument = node.ownerDocument; - if (!ownerDocument) { - throw new Error('Node has no owner document'); - } - var range = ownerDocument.createRange(); - range.setStart(node, offset); - range.setEnd(node, offset + length); - return range; - }; - var segmentGraphemes = function (value) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return splitGraphemes(value); - }; - var segmentWords = function (value, 
styles) { - if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) { - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var segmenter = new Intl.Segmenter(void 0, { - granularity: 'word' - }); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; }); - } - return breakWords(value, styles); - }; - var breakText = function (value, styles) { - return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles); - }; - // https://drafts.csswg.org/css-text/#word-separator - var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091]; - var breakWords = function (str, styles) { - var breaker = LineBreaker(str, { - lineBreak: styles.lineBreak, - wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak - }); - var words = []; - var bk; - var _loop_1 = function () { - if (bk.value) { - var value = bk.value.slice(); - var codePoints = toCodePoints$1(value); - var word_1 = ''; - codePoints.forEach(function (codePoint) { - if (wordSeparators.indexOf(codePoint) === -1) { - word_1 += fromCodePoint$1(codePoint); - } - else { - if (word_1.length) { - words.push(word_1); - } - words.push(fromCodePoint$1(codePoint)); - word_1 = ''; - } - }); - if (word_1.length) { - words.push(word_1); - } - } - }; - while (!(bk = breaker.next()).done) { - _loop_1(); - } - return words; - }; - - var TextContainer = /** @class */ (function () { - function TextContainer(context, node, styles) { - this.text = transform(node.data, styles.textTransform); - this.textBounds = parseTextBounds(context, this.text, styles, node); - } - return TextContainer; - }()); - var transform = function (text, transform) { - switch (transform) { - case 1 /* LOWERCASE */: - return text.toLowerCase(); - case 3 /* CAPITALIZE */: - return text.replace(CAPITALIZE, capitalize); - case 2 /* UPPERCASE */: - return text.toUpperCase(); - default: - return text; - } - }; - var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g; - var capitalize = function (m, p1, p2) { - if (m.length > 0) { - return p1 + p2.toUpperCase(); - } - return m; - }; - - var ImageElementContainer = /** @class */ (function (_super) { - __extends(ImageElementContainer, _super); - function ImageElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - _this.src = img.currentSrc || img.src; - _this.intrinsicWidth = img.naturalWidth; - _this.intrinsicHeight = img.naturalHeight; - _this.context.cache.addImage(_this.src); - return _this; - } - return ImageElementContainer; - }(ElementContainer)); - - var CanvasElementContainer = /** @class */ (function (_super) { - __extends(CanvasElementContainer, _super); - function CanvasElementContainer(context, canvas) { - var _this = _super.call(this, context, canvas) || this; - _this.canvas = canvas; - _this.intrinsicWidth = canvas.width; - _this.intrinsicHeight = canvas.height; - return _this; - } - return CanvasElementContainer; - }(ElementContainer)); - - var SVGElementContainer = /** @class */ (function (_super) { - __extends(SVGElementContainer, _super); - function SVGElementContainer(context, img) { - var _this = _super.call(this, context, img) || this; - var s = new XMLSerializer(); - var bounds = parseBounds(context, img); - img.setAttribute('width', bounds.width + "px"); - img.setAttribute('height', bounds.height + "px"); - _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img)); - 
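// The inline <svg> element is serialized into a data: URI above so that it can be loaded
// and drawn through the same image cache as ordinary images (see the
// context.cache.addImage call a couple of statements below).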
_this.intrinsicWidth = img.width.baseVal.value; - _this.intrinsicHeight = img.height.baseVal.value; - _this.context.cache.addImage(_this.svg); - return _this; - } - return SVGElementContainer; - }(ElementContainer)); - - var LIElementContainer = /** @class */ (function (_super) { - __extends(LIElementContainer, _super); - function LIElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return LIElementContainer; - }(ElementContainer)); - - var OLElementContainer = /** @class */ (function (_super) { - __extends(OLElementContainer, _super); - function OLElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.start = element.start; - _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true; - return _this; - } - return OLElementContainer; - }(ElementContainer)); - - var CHECKBOX_BORDER_RADIUS = [ - { - type: 15 /* DIMENSION_TOKEN */, - flags: 0, - unit: 'px', - number: 3 - } - ]; - var RADIO_BORDER_RADIUS = [ - { - type: 16 /* PERCENTAGE_TOKEN */, - flags: 0, - number: 50 - } - ]; - var reformatInputBounds = function (bounds) { - if (bounds.width > bounds.height) { - return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height); - } - else if (bounds.width < bounds.height) { - return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width); - } - return bounds; - }; - var getInputValue = function (node) { - var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value; - return value.length === 0 ? node.placeholder || '' : value; - }; - var CHECKBOX = 'checkbox'; - var RADIO = 'radio'; - var PASSWORD = 'password'; - var INPUT_COLOR = 0x2a2a2aff; - var InputElementContainer = /** @class */ (function (_super) { - __extends(InputElementContainer, _super); - function InputElementContainer(context, input) { - var _this = _super.call(this, context, input) || this; - _this.type = input.type.toLowerCase(); - _this.checked = input.checked; - _this.value = getInputValue(input); - if (_this.type === CHECKBOX || _this.type === RADIO) { - _this.styles.backgroundColor = 0xdededeff; - _this.styles.borderTopColor = - _this.styles.borderRightColor = - _this.styles.borderBottomColor = - _this.styles.borderLeftColor = - 0xa5a5a5ff; - _this.styles.borderTopWidth = - _this.styles.borderRightWidth = - _this.styles.borderBottomWidth = - _this.styles.borderLeftWidth = - 1; - _this.styles.borderTopStyle = - _this.styles.borderRightStyle = - _this.styles.borderBottomStyle = - _this.styles.borderLeftStyle = - 1 /* SOLID */; - _this.styles.backgroundClip = [0 /* BORDER_BOX */]; - _this.styles.backgroundOrigin = [0 /* BORDER_BOX */]; - _this.bounds = reformatInputBounds(_this.bounds); - } - switch (_this.type) { - case CHECKBOX: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - CHECKBOX_BORDER_RADIUS; - break; - case RADIO: - _this.styles.borderTopRightRadius = - _this.styles.borderTopLeftRadius = - _this.styles.borderBottomRightRadius = - _this.styles.borderBottomLeftRadius = - RADIO_BORDER_RADIUS; - break; - } - return _this; - } - return InputElementContainer; - }(ElementContainer)); - - var SelectElementContainer = /** @class */ (function (_super) { - __extends(SelectElementContainer, _super); - function 
SelectElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - var option = element.options[element.selectedIndex || 0]; - _this.value = option ? option.text || '' : ''; - return _this; - } - return SelectElementContainer; - }(ElementContainer)); - - var TextareaElementContainer = /** @class */ (function (_super) { - __extends(TextareaElementContainer, _super); - function TextareaElementContainer(context, element) { - var _this = _super.call(this, context, element) || this; - _this.value = element.value; - return _this; - } - return TextareaElementContainer; - }(ElementContainer)); - - var IFrameElementContainer = /** @class */ (function (_super) { - __extends(IFrameElementContainer, _super); - function IFrameElementContainer(context, iframe) { - var _this = _super.call(this, context, iframe) || this; - _this.src = iframe.src; - _this.width = parseInt(iframe.width, 10) || 0; - _this.height = parseInt(iframe.height, 10) || 0; - _this.backgroundColor = _this.styles.backgroundColor; - try { - if (iframe.contentWindow && - iframe.contentWindow.document && - iframe.contentWindow.document.documentElement) { - _this.tree = parseTree(context, iframe.contentWindow.document.documentElement); - // http://www.w3.org/TR/css3-background/#special-backgrounds - var documentBackgroundColor = iframe.contentWindow.document.documentElement - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor) - : COLORS.TRANSPARENT; - var bodyBackgroundColor = iframe.contentWindow.document.body - ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor) - : COLORS.TRANSPARENT; - _this.backgroundColor = isTransparent(documentBackgroundColor) - ? isTransparent(bodyBackgroundColor) - ? 
_this.styles.backgroundColor - : bodyBackgroundColor - : documentBackgroundColor; - } - } - catch (e) { } - return _this; - } - return IFrameElementContainer; - }(ElementContainer)); - - var LIST_OWNERS = ['OL', 'UL', 'MENU']; - var parseNodeTree = function (context, node, parent, root) { - for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) { - nextNode = childNode.nextSibling; - if (isTextNode(childNode) && childNode.data.trim().length > 0) { - parent.textNodes.push(new TextContainer(context, childNode, parent.styles)); - } - else if (isElementNode(childNode)) { - if (isSlotElement(childNode) && childNode.assignedNodes) { - childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); }); - } - else { - var container = createContainer(context, childNode); - if (container.styles.isVisible()) { - if (createsRealStackingContext(childNode, container, root)) { - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - } - else if (createsStackingContext(container.styles)) { - container.flags |= 2 /* CREATES_STACKING_CONTEXT */; - } - if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) { - container.flags |= 8 /* IS_LIST_OWNER */; - } - parent.elements.push(container); - childNode.slot; - if (childNode.shadowRoot) { - parseNodeTree(context, childNode.shadowRoot, container, root); - } - else if (!isTextareaElement(childNode) && - !isSVGElement(childNode) && - !isSelectElement(childNode)) { - parseNodeTree(context, childNode, container, root); - } - } - } - } - } - }; - var createContainer = function (context, element) { - if (isImageElement(element)) { - return new ImageElementContainer(context, element); - } - if (isCanvasElement(element)) { - return new CanvasElementContainer(context, element); - } - if (isSVGElement(element)) { - return new SVGElementContainer(context, element); - } - if (isLIElement(element)) { - return new LIElementContainer(context, element); - } - if (isOLElement(element)) { - return new OLElementContainer(context, element); - } - if (isInputElement(element)) { - return new InputElementContainer(context, element); - } - if (isSelectElement(element)) { - return new SelectElementContainer(context, element); - } - if (isTextareaElement(element)) { - return new TextareaElementContainer(context, element); - } - if (isIFrameElement(element)) { - return new IFrameElementContainer(context, element); - } - return new ElementContainer(context, element); - }; - var parseTree = function (context, element) { - var container = createContainer(context, element); - container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */; - parseNodeTree(context, element, container, container); - return container; - }; - var createsRealStackingContext = function (node, container, root) { - return (container.styles.isPositionedWithZIndex() || - container.styles.opacity < 1 || - container.styles.isTransformed() || - (isBodyElement(node) && root.styles.isTransparent())); - }; - var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); }; - var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; }; - var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; }; - var isHTMLElementNode = function (node) { - return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node); - }; - var isSVGElementNode = function (element) { - return typeof element.className === 'object'; - }; - var isLIElement = function 
(node) { return node.tagName === 'LI'; }; - var isOLElement = function (node) { return node.tagName === 'OL'; }; - var isInputElement = function (node) { return node.tagName === 'INPUT'; }; - var isHTMLElement = function (node) { return node.tagName === 'HTML'; }; - var isSVGElement = function (node) { return node.tagName === 'svg'; }; - var isBodyElement = function (node) { return node.tagName === 'BODY'; }; - var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; }; - var isVideoElement = function (node) { return node.tagName === 'VIDEO'; }; - var isImageElement = function (node) { return node.tagName === 'IMG'; }; - var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; }; - var isStyleElement = function (node) { return node.tagName === 'STYLE'; }; - var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; }; - var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; }; - var isSelectElement = function (node) { return node.tagName === 'SELECT'; }; - var isSlotElement = function (node) { return node.tagName === 'SLOT'; }; - // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name - var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; }; - - var CounterState = /** @class */ (function () { - function CounterState() { - this.counters = {}; - } - CounterState.prototype.getCounterValue = function (name) { - var counter = this.counters[name]; - if (counter && counter.length) { - return counter[counter.length - 1]; - } - return 1; - }; - CounterState.prototype.getCounterValues = function (name) { - var counter = this.counters[name]; - return counter ? counter : []; - }; - CounterState.prototype.pop = function (counters) { - var _this = this; - counters.forEach(function (counter) { return _this.counters[counter].pop(); }); - }; - CounterState.prototype.parse = function (style) { - var _this = this; - var counterIncrement = style.counterIncrement; - var counterReset = style.counterReset; - var canReset = true; - if (counterIncrement !== null) { - counterIncrement.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - if (counter && entry.increment !== 0) { - canReset = false; - if (!counter.length) { - counter.push(1); - } - counter[Math.max(0, counter.length - 1)] += entry.increment; - } - }); - } - var counterNames = []; - if (canReset) { - counterReset.forEach(function (entry) { - var counter = _this.counters[entry.counter]; - counterNames.push(entry.counter); - if (!counter) { - counter = _this.counters[entry.counter] = []; - } - counter.push(entry.reset); - }); - } - return counterNames; - }; - return CounterState; - }()); - var ROMAN_UPPER = { - integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1], - values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I'] - }; - var ARMENIAN = { - integers: [ - 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70, - 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'Ք', - 'Փ', - 'Ւ', - 'Ց', - 'Ր', - 'Տ', - 'Վ', - 'Ս', - 'Ռ', - 'Ջ', - 'Պ', - 'Չ', - 'Ո', - 'Շ', - 'Ն', - 'Յ', - 'Մ', - 'Ճ', - 'Ղ', - 'Ձ', - 'Հ', - 'Կ', - 'Ծ', - 'Խ', - 'Լ', - 'Ի', - 'Ժ', - 'Թ', - 'Ը', - 'Է', - 'Զ', - 'Ե', - 'Դ', - 'Գ', - 'Բ', - 'Ա' - ] - }; - var HEBREW = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, - 19, 18, 17, 16, 15, 10, 9, 8, 7, 
6, 5, 4, 3, 2, 1 - ], - values: [ - 'י׳', - 'ט׳', - 'ח׳', - 'ז׳', - 'ו׳', - 'ה׳', - 'ד׳', - 'ג׳', - 'ב׳', - 'א׳', - 'ת', - 'ש', - 'ר', - 'ק', - 'צ', - 'פ', - 'ע', - 'ס', - 'נ', - 'מ', - 'ל', - 'כ', - 'יט', - 'יח', - 'יז', - 'טז', - 'טו', - 'י', - 'ט', - 'ח', - 'ז', - 'ו', - 'ה', - 'ד', - 'ג', - 'ב', - 'א' - ] - }; - var GEORGIAN = { - integers: [ - 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, - 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 - ], - values: [ - 'ჵ', - 'ჰ', - 'ჯ', - 'ჴ', - 'ხ', - 'ჭ', - 'წ', - 'ძ', - 'ც', - 'ჩ', - 'შ', - 'ყ', - 'ღ', - 'ქ', - 'ფ', - 'ჳ', - 'ტ', - 'ს', - 'რ', - 'ჟ', - 'პ', - 'ო', - 'ჲ', - 'ნ', - 'მ', - 'ლ', - 'კ', - 'ი', - 'თ', - 'ჱ', - 'ზ', - 'ვ', - 'ე', - 'დ', - 'გ', - 'ბ', - 'ა' - ] - }; - var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) { - if (value < min || value > max) { - return createCounterText(value, fallback, suffix.length > 0); - } - return (symbols.integers.reduce(function (string, integer, index) { - while (value >= integer) { - value -= integer; - string += symbols.values[index]; - } - return string; - }, '') + suffix); - }; - var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) { - var string = ''; - do { - if (!isNumeric) { - value--; - } - string = resolver(value) + string; - value /= codePointRangeLength; - } while (value * codePointRangeLength >= codePointRangeLength); - return string; - }; - var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) { - var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1; - return ((value < 0 ? '-' : '') + - (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) { - return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart); - }) + - suffix)); - }; - var createCounterStyleFromSymbols = function (value, symbols, suffix) { - if (suffix === void 0) { suffix = '. '; } - var codePointRangeLength = symbols.length; - return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix); - }; - var CJK_ZEROS = 1 << 0; - var CJK_TEN_COEFFICIENTS = 1 << 1; - var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2; - var CJK_HUNDRED_COEFFICIENTS = 1 << 3; - var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) { - if (value < -9999 || value > 9999) { - return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0); - } - var tmp = Math.abs(value); - var string = suffix; - if (tmp === 0) { - return numbers[0] + string; - } - for (var digit = 0; tmp > 0 && digit <= 4; digit++) { - var coefficient = tmp % 10; - if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') { - string = numbers[coefficient] + string; - } - else if (coefficient > 1 || - (coefficient === 1 && digit === 0) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) || - (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) || - (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) { - string = numbers[coefficient] + (digit > 0 ? 
multipliers[digit - 1] : '') + string; - } - else if (coefficient === 1 && digit > 0) { - string = multipliers[digit - 1] + string; - } - tmp = Math.floor(tmp / 10); - } - return (value < 0 ? negativeSign : '') + string; - }; - var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬'; - var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬'; - var JAPANESE_NEGATIVE = 'マイナス'; - var KOREAN_NEGATIVE = '마이너스'; - var createCounterText = function (value, type, appendSuffix) { - var defaultSuffix = appendSuffix ? '. ' : ''; - var cjkSuffix = appendSuffix ? '、' : ''; - var koreanSuffix = appendSuffix ? ', ' : ''; - var spaceSuffix = appendSuffix ? ' ' : ''; - switch (type) { - case 0 /* DISC */: - return '•' + spaceSuffix; - case 1 /* CIRCLE */: - return '◦' + spaceSuffix; - case 2 /* SQUARE */: - return '◾' + spaceSuffix; - case 5 /* DECIMAL_LEADING_ZERO */: - var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - return string.length < 4 ? "0" + string : string; - case 4 /* CJK_DECIMAL */: - return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix); - case 6 /* LOWER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 7 /* UPPER_ROMAN */: - return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix); - case 8 /* LOWER_GREEK */: - return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix); - case 9 /* LOWER_ALPHA */: - return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix); - case 10 /* UPPER_ALPHA */: - return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix); - case 11 /* ARABIC_INDIC */: - return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix); - case 12 /* ARMENIAN */: - case 49 /* UPPER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix); - case 35 /* LOWER_ARMENIAN */: - return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase(); - case 13 /* BENGALI */: - return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix); - case 14 /* CAMBODIAN */: - case 30 /* KHMER */: - return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix); - case 15 /* CJK_EARTHLY_BRANCH */: - return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix); - case 16 /* CJK_HEAVENLY_STEM */: - return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix); - case 17 /* CJK_IDEOGRAPHIC */: - case 48 /* TRAD_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 47 /* TRAD_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 42 /* SIMP_CHINESE_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 41 /* SIMP_CHINESE_FORMAL */: - return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS); - case 26 /* JAPANESE_INFORMAL */: - return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0); - case 25 /* JAPANESE_FORMAL */: - return createCJKCounter(value, 
'零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 31 /* KOREAN_HANGUL_FORMAL */: - return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 33 /* KOREAN_HANJA_INFORMAL */: - return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0); - case 32 /* KOREAN_HANJA_FORMAL */: - return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS); - case 18 /* DEVANAGARI */: - return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix); - case 20 /* GEORGIAN */: - return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix); - case 21 /* GUJARATI */: - return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix); - case 22 /* GURMUKHI */: - return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix); - case 22 /* HEBREW */: - return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix); - case 23 /* HIRAGANA */: - return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん'); - case 24 /* HIRAGANA_IROHA */: - return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす'); - case 27 /* KANNADA */: - return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix); - case 28 /* KATAKANA */: - return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix); - case 29 /* KATAKANA_IROHA */: - return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix); - case 34 /* LAO */: - return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix); - case 37 /* MONGOLIAN */: - return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix); - case 38 /* MYANMAR */: - return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix); - case 39 /* ORIYA */: - return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix); - case 40 /* PERSIAN */: - return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix); - case 43 /* TAMIL */: - return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix); - case 44 /* TELUGU */: - return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix); - case 45 /* THAI */: - return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix); - case 46 /* TIBETAN */: - return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix); - case 3 /* DECIMAL */: - default: - return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix); - } - }; - - var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore'; - var DocumentCloner = /** @class */ (function () { - function DocumentCloner(context, element, options) { - this.context = context; - this.options = options; - this.scrolledElements = []; - this.referenceElement = element; - this.counters = new CounterState(); - this.quoteDepth = 0; - if (!element.ownerDocument) { - throw new Error('Cloned element does not have an owner document'); - } - this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false); - } - DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) { - var _this = this; - var iframe = createIFrameContainer(ownerDocument, 
windowSize); - if (!iframe.contentWindow) { - return Promise.reject("Unable to find iframe window"); - } - var scrollX = ownerDocument.defaultView.pageXOffset; - var scrollY = ownerDocument.defaultView.pageYOffset; - var cloneWindow = iframe.contentWindow; - var documentClone = cloneWindow.document; - /* Chrome doesn't detect relative background-images assigned in inline - -{% endfor %} \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py deleted file mode 100644 index 30233fc7ad2c07c42e7c2d384312f1f4373155f6..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/completion.py +++ /dev/null @@ -1,121 +0,0 @@ -import sys -import textwrap -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import SUCCESS -from pip._internal.utils.misc import get_prog - -BASE_COMPLETION = """ -# pip {shell} completion start{script}# pip {shell} completion end -""" - -COMPLETION_SCRIPTS = { - "bash": """ - _pip_completion() - {{ - COMPREPLY=( $( COMP_WORDS="${{COMP_WORDS[*]}}" \\ - COMP_CWORD=$COMP_CWORD \\ - PIP_AUTO_COMPLETE=1 $1 2>/dev/null ) ) - }} - complete -o default -F _pip_completion {prog} - """, - "zsh": """ - #compdef -P pip[0-9.]# - compadd $( COMP_WORDS="$words[*]" \\ - COMP_CWORD=$((CURRENT-1)) \\ - PIP_AUTO_COMPLETE=1 $words[1] 2>/dev/null ) - """, - "fish": """ - function __fish_complete_pip - set -lx COMP_WORDS (commandline -o) "" - set -lx COMP_CWORD ( \\ - math (contains -i -- (commandline -t) $COMP_WORDS)-1 \\ - ) - set -lx PIP_AUTO_COMPLETE 1 - string split \\ -- (eval $COMP_WORDS[1]) - end - complete -fa "(__fish_complete_pip)" -c {prog} - """, - "powershell": """ - if ((Test-Path Function:\\TabExpansion) -and -not ` - (Test-Path Function:\\_pip_completeBackup)) {{ - Rename-Item Function:\\TabExpansion _pip_completeBackup - }} - function TabExpansion($line, $lastWord) {{ - $lastBlock = [regex]::Split($line, '[|;]')[-1].TrimStart() - if ($lastBlock.StartsWith("{prog} ")) {{ - $Env:COMP_WORDS=$lastBlock - $Env:COMP_CWORD=$lastBlock.Split().Length - 1 - $Env:PIP_AUTO_COMPLETE=1 - (& {prog}).Split() - Remove-Item Env:COMP_WORDS - Remove-Item Env:COMP_CWORD - Remove-Item Env:PIP_AUTO_COMPLETE - }} - elseif (Test-Path Function:\\_pip_completeBackup) {{ - # Fall back on existing tab expansion - _pip_completeBackup $line $lastWord - }} - }} - """, -} - - -class CompletionCommand(Command): - """A helper command to be used for command completion.""" - - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "--bash", - "-b", - action="store_const", - const="bash", - dest="shell", - help="Emit completion code for bash", - ) - self.cmd_opts.add_option( - "--zsh", - "-z", - action="store_const", - const="zsh", - dest="shell", - help="Emit completion code for zsh", - ) - self.cmd_opts.add_option( - "--fish", - "-f", - action="store_const", - const="fish", - dest="shell", - help="Emit completion code for fish", - ) - self.cmd_opts.add_option( - "--powershell", - "-p", - action="store_const", - const="powershell", - dest="shell", - help="Emit completion code for powershell", - ) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - """Prints the 
completion code of the given shell""" - shells = COMPLETION_SCRIPTS.keys() - shell_options = ["--" + shell for shell in sorted(shells)] - if options.shell in shells: - script = textwrap.dedent( - COMPLETION_SCRIPTS.get(options.shell, "").format(prog=get_prog()) - ) - print(BASE_COMPLETION.format(script=script, shell=options.shell)) - return SUCCESS - else: - sys.stderr.write( - "ERROR: You must pass {}\n".format(" or ".join(shell_options)) - ) - return SUCCESS diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/macosx_libfile.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/macosx_libfile.py deleted file mode 100644 index 3d19984813236184c8f87bead16a282f1980ffd4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/macosx_libfile.py +++ /dev/null @@ -1,471 +0,0 @@ -""" -This module contains function to analyse dynamic library -headers to extract system information - -Currently only for MacOSX - -Library file on macosx system starts with Mach-O or Fat field. -This can be distinguish by first 32 bites and it is called magic number. -Proper value of magic number is with suffix _MAGIC. Suffix _CIGAM means -reversed bytes order. -Both fields can occur in two types: 32 and 64 bytes. - -FAT field inform that this library contains few version of library -(typically for different types version). It contains -information where Mach-O headers starts. - -Each section started with Mach-O header contains one library -(So if file starts with this field it contains only one version). - -After filed Mach-O there are section fields. -Each of them starts with two fields: -cmd - magic number for this command -cmdsize - total size occupied by this section information. - -In this case only sections LC_VERSION_MIN_MACOSX (for macosx 10.13 and earlier) -and LC_BUILD_VERSION (for macosx 10.14 and newer) are interesting, -because them contains information about minimal system version. - -Important remarks: -- For fat files this implementation looks for maximum number version. - It not check if it is 32 or 64 and do not compare it with currently built package. - So it is possible to false report higher version that needed. -- All structures signatures are taken form macosx header files. -- I think that binary format will be more stable than `otool` output. - and if apple introduce some changes both implementation will need to be updated. -- The system compile will set the deployment target no lower than - 11.0 for arm64 builds. For "Universal 2" builds use the x86_64 deployment - target when the arm64 target is 11.0. 
-""" - -from __future__ import annotations - -import ctypes -import os -import sys - -"""here the needed const and struct from mach-o header files""" - -FAT_MAGIC = 0xCAFEBABE -FAT_CIGAM = 0xBEBAFECA -FAT_MAGIC_64 = 0xCAFEBABF -FAT_CIGAM_64 = 0xBFBAFECA -MH_MAGIC = 0xFEEDFACE -MH_CIGAM = 0xCEFAEDFE -MH_MAGIC_64 = 0xFEEDFACF -MH_CIGAM_64 = 0xCFFAEDFE - -LC_VERSION_MIN_MACOSX = 0x24 -LC_BUILD_VERSION = 0x32 - -CPU_TYPE_ARM64 = 0x0100000C - -mach_header_fields = [ - ("magic", ctypes.c_uint32), - ("cputype", ctypes.c_int), - ("cpusubtype", ctypes.c_int), - ("filetype", ctypes.c_uint32), - ("ncmds", ctypes.c_uint32), - ("sizeofcmds", ctypes.c_uint32), - ("flags", ctypes.c_uint32), -] -""" -struct mach_header { - uint32_t magic; /* mach magic number identifier */ - cpu_type_t cputype; /* cpu specifier */ - cpu_subtype_t cpusubtype; /* machine specifier */ - uint32_t filetype; /* type of file */ - uint32_t ncmds; /* number of load commands */ - uint32_t sizeofcmds; /* the size of all the load commands */ - uint32_t flags; /* flags */ -}; -typedef integer_t cpu_type_t; -typedef integer_t cpu_subtype_t; -""" - -mach_header_fields_64 = mach_header_fields + [("reserved", ctypes.c_uint32)] -""" -struct mach_header_64 { - uint32_t magic; /* mach magic number identifier */ - cpu_type_t cputype; /* cpu specifier */ - cpu_subtype_t cpusubtype; /* machine specifier */ - uint32_t filetype; /* type of file */ - uint32_t ncmds; /* number of load commands */ - uint32_t sizeofcmds; /* the size of all the load commands */ - uint32_t flags; /* flags */ - uint32_t reserved; /* reserved */ -}; -""" - -fat_header_fields = [("magic", ctypes.c_uint32), ("nfat_arch", ctypes.c_uint32)] -""" -struct fat_header { - uint32_t magic; /* FAT_MAGIC or FAT_MAGIC_64 */ - uint32_t nfat_arch; /* number of structs that follow */ -}; -""" - -fat_arch_fields = [ - ("cputype", ctypes.c_int), - ("cpusubtype", ctypes.c_int), - ("offset", ctypes.c_uint32), - ("size", ctypes.c_uint32), - ("align", ctypes.c_uint32), -] -""" -struct fat_arch { - cpu_type_t cputype; /* cpu specifier (int) */ - cpu_subtype_t cpusubtype; /* machine specifier (int) */ - uint32_t offset; /* file offset to this object file */ - uint32_t size; /* size of this object file */ - uint32_t align; /* alignment as a power of 2 */ -}; -""" - -fat_arch_64_fields = [ - ("cputype", ctypes.c_int), - ("cpusubtype", ctypes.c_int), - ("offset", ctypes.c_uint64), - ("size", ctypes.c_uint64), - ("align", ctypes.c_uint32), - ("reserved", ctypes.c_uint32), -] -""" -struct fat_arch_64 { - cpu_type_t cputype; /* cpu specifier (int) */ - cpu_subtype_t cpusubtype; /* machine specifier (int) */ - uint64_t offset; /* file offset to this object file */ - uint64_t size; /* size of this object file */ - uint32_t align; /* alignment as a power of 2 */ - uint32_t reserved; /* reserved */ -}; -""" - -segment_base_fields = [("cmd", ctypes.c_uint32), ("cmdsize", ctypes.c_uint32)] -"""base for reading segment info""" - -segment_command_fields = [ - ("cmd", ctypes.c_uint32), - ("cmdsize", ctypes.c_uint32), - ("segname", ctypes.c_char * 16), - ("vmaddr", ctypes.c_uint32), - ("vmsize", ctypes.c_uint32), - ("fileoff", ctypes.c_uint32), - ("filesize", ctypes.c_uint32), - ("maxprot", ctypes.c_int), - ("initprot", ctypes.c_int), - ("nsects", ctypes.c_uint32), - ("flags", ctypes.c_uint32), -] -""" -struct segment_command { /* for 32-bit architectures */ - uint32_t cmd; /* LC_SEGMENT */ - uint32_t cmdsize; /* includes sizeof section structs */ - char segname[16]; /* segment name */ - uint32_t vmaddr; /* 
memory address of this segment */ - uint32_t vmsize; /* memory size of this segment */ - uint32_t fileoff; /* file offset of this segment */ - uint32_t filesize; /* amount to map from the file */ - vm_prot_t maxprot; /* maximum VM protection */ - vm_prot_t initprot; /* initial VM protection */ - uint32_t nsects; /* number of sections in segment */ - uint32_t flags; /* flags */ -}; -typedef int vm_prot_t; -""" - -segment_command_fields_64 = [ - ("cmd", ctypes.c_uint32), - ("cmdsize", ctypes.c_uint32), - ("segname", ctypes.c_char * 16), - ("vmaddr", ctypes.c_uint64), - ("vmsize", ctypes.c_uint64), - ("fileoff", ctypes.c_uint64), - ("filesize", ctypes.c_uint64), - ("maxprot", ctypes.c_int), - ("initprot", ctypes.c_int), - ("nsects", ctypes.c_uint32), - ("flags", ctypes.c_uint32), -] -""" -struct segment_command_64 { /* for 64-bit architectures */ - uint32_t cmd; /* LC_SEGMENT_64 */ - uint32_t cmdsize; /* includes sizeof section_64 structs */ - char segname[16]; /* segment name */ - uint64_t vmaddr; /* memory address of this segment */ - uint64_t vmsize; /* memory size of this segment */ - uint64_t fileoff; /* file offset of this segment */ - uint64_t filesize; /* amount to map from the file */ - vm_prot_t maxprot; /* maximum VM protection */ - vm_prot_t initprot; /* initial VM protection */ - uint32_t nsects; /* number of sections in segment */ - uint32_t flags; /* flags */ -}; -""" - -version_min_command_fields = segment_base_fields + [ - ("version", ctypes.c_uint32), - ("sdk", ctypes.c_uint32), -] -""" -struct version_min_command { - uint32_t cmd; /* LC_VERSION_MIN_MACOSX or - LC_VERSION_MIN_IPHONEOS or - LC_VERSION_MIN_WATCHOS or - LC_VERSION_MIN_TVOS */ - uint32_t cmdsize; /* sizeof(struct min_version_command) */ - uint32_t version; /* X.Y.Z is encoded in nibbles xxxx.yy.zz */ - uint32_t sdk; /* X.Y.Z is encoded in nibbles xxxx.yy.zz */ -}; -""" - -build_version_command_fields = segment_base_fields + [ - ("platform", ctypes.c_uint32), - ("minos", ctypes.c_uint32), - ("sdk", ctypes.c_uint32), - ("ntools", ctypes.c_uint32), -] -""" -struct build_version_command { - uint32_t cmd; /* LC_BUILD_VERSION */ - uint32_t cmdsize; /* sizeof(struct build_version_command) plus */ - /* ntools * sizeof(struct build_tool_version) */ - uint32_t platform; /* platform */ - uint32_t minos; /* X.Y.Z is encoded in nibbles xxxx.yy.zz */ - uint32_t sdk; /* X.Y.Z is encoded in nibbles xxxx.yy.zz */ - uint32_t ntools; /* number of tool entries following this */ -}; -""" - - -def swap32(x): - return ( - ((x << 24) & 0xFF000000) - | ((x << 8) & 0x00FF0000) - | ((x >> 8) & 0x0000FF00) - | ((x >> 24) & 0x000000FF) - ) - - -def get_base_class_and_magic_number(lib_file, seek=None): - if seek is None: - seek = lib_file.tell() - else: - lib_file.seek(seek) - magic_number = ctypes.c_uint32.from_buffer_copy( - lib_file.read(ctypes.sizeof(ctypes.c_uint32)) - ).value - - # Handle wrong byte order - if magic_number in [FAT_CIGAM, FAT_CIGAM_64, MH_CIGAM, MH_CIGAM_64]: - if sys.byteorder == "little": - BaseClass = ctypes.BigEndianStructure - else: - BaseClass = ctypes.LittleEndianStructure - - magic_number = swap32(magic_number) - else: - BaseClass = ctypes.Structure - - lib_file.seek(seek) - return BaseClass, magic_number - - -def read_data(struct_class, lib_file): - return struct_class.from_buffer_copy(lib_file.read(ctypes.sizeof(struct_class))) - - -def extract_macosx_min_system_version(path_to_lib): - with open(path_to_lib, "rb") as lib_file: - BaseClass, magic_number = get_base_class_and_magic_number(lib_file, 0) - if 
magic_number not in [FAT_MAGIC, FAT_MAGIC_64, MH_MAGIC, MH_MAGIC_64]: - return - - if magic_number in [FAT_MAGIC, FAT_CIGAM_64]: - - class FatHeader(BaseClass): - _fields_ = fat_header_fields - - fat_header = read_data(FatHeader, lib_file) - if magic_number == FAT_MAGIC: - - class FatArch(BaseClass): - _fields_ = fat_arch_fields - - else: - - class FatArch(BaseClass): - _fields_ = fat_arch_64_fields - - fat_arch_list = [ - read_data(FatArch, lib_file) for _ in range(fat_header.nfat_arch) - ] - - versions_list = [] - for el in fat_arch_list: - try: - version = read_mach_header(lib_file, el.offset) - if version is not None: - if el.cputype == CPU_TYPE_ARM64 and len(fat_arch_list) != 1: - # Xcode will not set the deployment target below 11.0.0 - # for the arm64 architecture. Ignore the arm64 deployment - # in fat binaries when the target is 11.0.0, that way - # the other architectures can select a lower deployment - # target. - # This is safe because there is no arm64 variant for - # macOS 10.15 or earlier. - if version == (11, 0, 0): - continue - versions_list.append(version) - except ValueError: - pass - - if len(versions_list) > 0: - return max(versions_list) - else: - return None - - else: - try: - return read_mach_header(lib_file, 0) - except ValueError: - """when some error during read library files""" - return None - - -def read_mach_header(lib_file, seek=None): - """ - This funcition parse mach-O header and extract - information about minimal system version - - :param lib_file: reference to opened library file with pointer - """ - if seek is not None: - lib_file.seek(seek) - base_class, magic_number = get_base_class_and_magic_number(lib_file) - arch = "32" if magic_number == MH_MAGIC else "64" - - class SegmentBase(base_class): - _fields_ = segment_base_fields - - if arch == "32": - - class MachHeader(base_class): - _fields_ = mach_header_fields - - else: - - class MachHeader(base_class): - _fields_ = mach_header_fields_64 - - mach_header = read_data(MachHeader, lib_file) - for _i in range(mach_header.ncmds): - pos = lib_file.tell() - segment_base = read_data(SegmentBase, lib_file) - lib_file.seek(pos) - if segment_base.cmd == LC_VERSION_MIN_MACOSX: - - class VersionMinCommand(base_class): - _fields_ = version_min_command_fields - - version_info = read_data(VersionMinCommand, lib_file) - return parse_version(version_info.version) - elif segment_base.cmd == LC_BUILD_VERSION: - - class VersionBuild(base_class): - _fields_ = build_version_command_fields - - version_info = read_data(VersionBuild, lib_file) - return parse_version(version_info.minos) - else: - lib_file.seek(pos + segment_base.cmdsize) - continue - - -def parse_version(version): - x = (version & 0xFFFF0000) >> 16 - y = (version & 0x0000FF00) >> 8 - z = version & 0x000000FF - return x, y, z - - -def calculate_macosx_platform_tag(archive_root, platform_tag): - """ - Calculate proper macosx platform tag basing on files which are included to wheel - - Example platform tag `macosx-10.14-x86_64` - """ - prefix, base_version, suffix = platform_tag.split("-") - base_version = tuple(int(x) for x in base_version.split(".")) - base_version = base_version[:2] - if base_version[0] > 10: - base_version = (base_version[0], 0) - assert len(base_version) == 2 - if "MACOSX_DEPLOYMENT_TARGET" in os.environ: - deploy_target = tuple( - int(x) for x in os.environ["MACOSX_DEPLOYMENT_TARGET"].split(".") - ) - deploy_target = deploy_target[:2] - if deploy_target[0] > 10: - deploy_target = (deploy_target[0], 0) - if deploy_target < base_version: - 
sys.stderr.write( - "[WARNING] MACOSX_DEPLOYMENT_TARGET is set to a lower value ({}) than " - "the version on which the Python interpreter was compiled ({}), and " - "will be ignored.\n".format( - ".".join(str(x) for x in deploy_target), - ".".join(str(x) for x in base_version), - ) - ) - else: - base_version = deploy_target - - assert len(base_version) == 2 - start_version = base_version - versions_dict = {} - for dirpath, _dirnames, filenames in os.walk(archive_root): - for filename in filenames: - if filename.endswith(".dylib") or filename.endswith(".so"): - lib_path = os.path.join(dirpath, filename) - min_ver = extract_macosx_min_system_version(lib_path) - if min_ver is not None: - min_ver = min_ver[0:2] - if min_ver[0] > 10: - min_ver = (min_ver[0], 0) - versions_dict[lib_path] = min_ver - - if len(versions_dict) > 0: - base_version = max(base_version, max(versions_dict.values())) - - # macosx platform tag do not support minor bugfix release - fin_base_version = "_".join([str(x) for x in base_version]) - if start_version < base_version: - problematic_files = [k for k, v in versions_dict.items() if v > start_version] - problematic_files = "\n".join(problematic_files) - if len(problematic_files) == 1: - files_form = "this file" - else: - files_form = "these files" - error_message = ( - "[WARNING] This wheel needs a higher macOS version than {} " - "To silence this warning, set MACOSX_DEPLOYMENT_TARGET to at least " - + fin_base_version - + " or recreate " - + files_form - + " with lower " - "MACOSX_DEPLOYMENT_TARGET: \n" + problematic_files - ) - - if "MACOSX_DEPLOYMENT_TARGET" in os.environ: - error_message = error_message.format( - "is set in MACOSX_DEPLOYMENT_TARGET variable." - ) - else: - error_message = error_message.format( - "the version your Python interpreter is compiled against." 
- ) - - sys.stderr.write(error_message) - - platform_tag = prefix + "_" + fin_base_version + "_" + suffix - return platform_tag diff --git a/spaces/power2/JoJoGan-powerhow2/e4e/models/__init__.py b/spaces/power2/JoJoGan-powerhow2/e4e/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/requests.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/requests.py deleted file mode 100644 index d16552c0a9535e1c0bd7f701987301681832eba5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fastapi/requests.py +++ /dev/null @@ -1,2 +0,0 @@ -from starlette.requests import HTTPConnection as HTTPConnection # noqa: F401 -from starlette.requests import Request as Request # noqa: F401 diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/trio.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/trio.py deleted file mode 100644 index b1626d28e2ded284a65d48a32309ee201d953c5e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_backends/trio.py +++ /dev/null @@ -1,161 +0,0 @@ -import ssl -import typing - -import trio - -from .._exceptions import ( - ConnectError, - ConnectTimeout, - ExceptionMapping, - ReadError, - ReadTimeout, - WriteError, - WriteTimeout, - map_exceptions, -) -from .base import SOCKET_OPTION, AsyncNetworkBackend, AsyncNetworkStream - - -class TrioStream(AsyncNetworkStream): - def __init__(self, stream: trio.abc.Stream) -> None: - self._stream = stream - - async def read( - self, max_bytes: int, timeout: typing.Optional[float] = None - ) -> bytes: - timeout_or_inf = float("inf") if timeout is None else timeout - exc_map: ExceptionMapping = { - trio.TooSlowError: ReadTimeout, - trio.BrokenResourceError: ReadError, - trio.ClosedResourceError: ReadError, - } - with map_exceptions(exc_map): - with trio.fail_after(timeout_or_inf): - data: bytes = await self._stream.receive_some(max_bytes=max_bytes) - return data - - async def write( - self, buffer: bytes, timeout: typing.Optional[float] = None - ) -> None: - if not buffer: - return - - timeout_or_inf = float("inf") if timeout is None else timeout - exc_map: ExceptionMapping = { - trio.TooSlowError: WriteTimeout, - trio.BrokenResourceError: WriteError, - trio.ClosedResourceError: WriteError, - } - with map_exceptions(exc_map): - with trio.fail_after(timeout_or_inf): - await self._stream.send_all(data=buffer) - - async def aclose(self) -> None: - await self._stream.aclose() - - async def start_tls( - self, - ssl_context: ssl.SSLContext, - server_hostname: typing.Optional[str] = None, - timeout: typing.Optional[float] = None, - ) -> AsyncNetworkStream: - timeout_or_inf = float("inf") if timeout is None else timeout - exc_map: ExceptionMapping = { - trio.TooSlowError: ConnectTimeout, - trio.BrokenResourceError: ConnectError, - } - ssl_stream = trio.SSLStream( - self._stream, - ssl_context=ssl_context, - server_hostname=server_hostname, - https_compatible=True, - server_side=False, - ) - with map_exceptions(exc_map): - try: - with trio.fail_after(timeout_or_inf): - await ssl_stream.do_handshake() - except Exception as exc: # pragma: nocover - await self.aclose() - raise exc - return TrioStream(ssl_stream) - - def get_extra_info(self, info: str) -> typing.Any: - if info 
== "ssl_object" and isinstance(self._stream, trio.SSLStream): - # Type checkers cannot see `_ssl_object` attribute because trio._ssl.SSLStream uses __getattr__/__setattr__. - # Tracked at https://github.com/python-trio/trio/issues/542 - return self._stream._ssl_object # type: ignore[attr-defined] - if info == "client_addr": - return self._get_socket_stream().socket.getsockname() - if info == "server_addr": - return self._get_socket_stream().socket.getpeername() - if info == "socket": - stream = self._stream - while isinstance(stream, trio.SSLStream): - stream = stream.transport_stream - assert isinstance(stream, trio.SocketStream) - return stream.socket - if info == "is_readable": - socket = self.get_extra_info("socket") - return socket.is_readable() - return None - - def _get_socket_stream(self) -> trio.SocketStream: - stream = self._stream - while isinstance(stream, trio.SSLStream): - stream = stream.transport_stream - assert isinstance(stream, trio.SocketStream) - return stream - - -class TrioBackend(AsyncNetworkBackend): - async def connect_tcp( - self, - host: str, - port: int, - timeout: typing.Optional[float] = None, - local_address: typing.Optional[str] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: - # By default for TCP sockets, trio enables TCP_NODELAY. - # https://trio.readthedocs.io/en/stable/reference-io.html#trio.SocketStream - if socket_options is None: - socket_options = [] # pragma: no cover - timeout_or_inf = float("inf") if timeout is None else timeout - exc_map: ExceptionMapping = { - trio.TooSlowError: ConnectTimeout, - trio.BrokenResourceError: ConnectError, - OSError: ConnectError, - } - with map_exceptions(exc_map): - with trio.fail_after(timeout_or_inf): - stream: trio.abc.Stream = await trio.open_tcp_stream( - host=host, port=port, local_address=local_address - ) - for option in socket_options: - stream.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return TrioStream(stream) - - async def connect_unix_socket( - self, - path: str, - timeout: typing.Optional[float] = None, - socket_options: typing.Optional[typing.Iterable[SOCKET_OPTION]] = None, - ) -> AsyncNetworkStream: # pragma: nocover - if socket_options is None: - socket_options = [] - timeout_or_inf = float("inf") if timeout is None else timeout - exc_map: ExceptionMapping = { - trio.TooSlowError: ConnectTimeout, - trio.BrokenResourceError: ConnectError, - OSError: ConnectError, - } - with map_exceptions(exc_map): - with trio.fail_after(timeout_or_inf): - stream: trio.abc.Stream = await trio.open_unix_socket(path) - for option in socket_options: - stream.setsockopt(*option) # type: ignore[attr-defined] # pragma: no cover - return TrioStream(stream) - - async def sleep(self, seconds: float) -> None: - await trio.sleep(seconds) # pragma: nocover diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/inference/_generated/_async_client.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/inference/_generated/_async_client.py deleted file mode 100644 index 3ab4faf43650d8ef404e206b5e31392808de6e26..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/inference/_generated/_async_client.py +++ /dev/null @@ -1,1959 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# WARNING -# This entire file has been adapted from the sync-client code in `src/huggingface_hub/inference/_client.py`. -# Any change in InferenceClient will be automatically reflected in AsyncInferenceClient. -# To re-generate the code, run `make style` or `python ./utils/generate_async_inference_client.py --update`. -# WARNING -import asyncio -import logging -import time -import warnings -from dataclasses import asdict -from typing import ( - TYPE_CHECKING, - Any, - AsyncIterable, - Dict, - List, - Literal, - Optional, - Union, - overload, -) - -from requests.structures import CaseInsensitiveDict - -from huggingface_hub.constants import ALL_INFERENCE_API_FRAMEWORKS, INFERENCE_ENDPOINT, MAIN_INFERENCE_API_FRAMEWORKS -from huggingface_hub.inference._common import ( - TASKS_EXPECTING_IMAGES, - ContentT, - InferenceTimeoutError, - ModelStatus, - _async_stream_text_generation_response, - _b64_encode, - _b64_to_image, - _bytes_to_dict, - _bytes_to_image, - _bytes_to_list, - _get_recommended_model, - _import_numpy, - _is_tgi_server, - _open_as_binary, - _set_as_non_tgi, -) -from huggingface_hub.inference._text_generation import ( - TextGenerationParameters, - TextGenerationRequest, - TextGenerationResponse, - TextGenerationStreamResponse, - raise_text_generation_error, -) -from huggingface_hub.inference._types import ( - ClassificationOutput, - ConversationalOutput, - FillMaskOutput, - ImageSegmentationOutput, - ObjectDetectionOutput, - QuestionAnsweringOutput, - TableQuestionAnsweringOutput, - TokenClassificationOutput, -) -from huggingface_hub.utils import ( - build_hf_headers, -) - -from .._common import _async_yield_from, _import_aiohttp - - -if TYPE_CHECKING: - import numpy as np - from PIL import Image - -logger = logging.getLogger(__name__) - - -class AsyncInferenceClient: - """ - Initialize a new Inference Client. - - [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used - seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. - - Args: - model (`str`, `optional`): - The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `bigcode/starcoder` - or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is - automatically selected for the task. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token. Pass `token=False` if you don't want to send - your token to the server. - timeout (`float`, `optional`): - The maximum number of seconds to wait for a response from the server. Loading a new model in Inference - API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. - headers (`Dict[str, str]`, `optional`): - Additional headers to send to the server. By default only the authorization and user-agent headers are sent. - Values in this dictionary will override the default values. 
- cookies (`Dict[str, str]`, `optional`): - Additional cookies to send to the server. - """ - - def __init__( - self, - model: Optional[str] = None, - token: Union[str, bool, None] = None, - timeout: Optional[float] = None, - headers: Optional[Dict[str, str]] = None, - cookies: Optional[Dict[str, str]] = None, - ) -> None: - self.model: Optional[str] = model - self.headers = CaseInsensitiveDict(build_hf_headers(token=token)) # contains 'authorization' + 'user-agent' - if headers is not None: - self.headers.update(headers) - self.cookies = cookies - self.timeout = timeout - - def __repr__(self): - return f"" - - @overload - async def post( # type: ignore - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: Literal[False] = ..., - ) -> bytes: - pass - - @overload - async def post( # type: ignore - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: Literal[True] = ..., - ) -> AsyncIterable[bytes]: - pass - - async def post( - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: bool = False, - ) -> Union[bytes, AsyncIterable[bytes]]: - """ - Make a POST request to the inference server. - - Args: - json (`Union[str, Dict, List]`, *optional*): - The JSON data to send in the request body. Defaults to None. - data (`Union[str, Path, bytes, BinaryIO]`, *optional*): - The content to send in the request body. It can be raw bytes, a pointer to an opened file, a local file - path, or a URL to an online resource (image, audio file,...). If both `json` and `data` are passed, - `data` will take precedence. At least `json` or `data` must be provided. Defaults to None. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. Will override the model defined at the instance level. Defaults to None. - task (`str`, *optional*): - The task to perform on the inference. Used only to default to a recommended model if `model` is not - provided. At least `model` or `task` must be provided. Defaults to None. - stream (`bool`, *optional*): - Whether to iterate over streaming APIs. - - Returns: - bytes: The raw bytes returned by the server. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- """ - - aiohttp = _import_aiohttp() - - url = self._resolve_url(model, task) - - if data is not None and json is not None: - warnings.warn("Ignoring `json` as `data` is passed as binary.") - - # Set Accept header if relevant - headers = self.headers.copy() - if task in TASKS_EXPECTING_IMAGES and "Accept" not in headers: - headers["Accept"] = "image/png" - - t0 = time.time() - timeout = self.timeout - while True: - with _open_as_binary(data) as data_as_binary: - # Do not use context manager as we don't want to close the connection immediately when returning - # a stream - client = aiohttp.ClientSession( - headers=headers, cookies=self.cookies, timeout=aiohttp.ClientTimeout(self.timeout) - ) - - try: - response = await client.post(url, json=json, data=data_as_binary) - response_error_payload = None - if response.status != 200: - try: - response_error_payload = await response.json() # get payload before connection closed - except Exception: - pass - response.raise_for_status() - if stream: - return _async_yield_from(client, response) - else: - content = await response.read() - await client.close() - return content - except asyncio.TimeoutError as error: - await client.close() - # Convert any `TimeoutError` to a `InferenceTimeoutError` - raise InferenceTimeoutError(f"Inference call timed out: {url}") from error # type: ignore - except aiohttp.ClientResponseError as error: - error.response_error_payload = response_error_payload - await client.close() - if response.status == 503: - # If Model is unavailable, either raise a TimeoutError... - if timeout is not None and time.time() - t0 > timeout: - raise InferenceTimeoutError( - f"Model not loaded on the server: {url}. Please retry with a higher timeout" - f" (current: {self.timeout}).", - request=error.request, - response=error.response, - ) from error - # ...or wait 1s and retry - logger.info(f"Waiting for model to be loaded on the server: {error}") - time.sleep(1) - if timeout is not None: - timeout = max(self.timeout - (time.time() - t0), 1) # type: ignore - continue - raise error - - async def audio_classification( - self, - audio: ContentT, - *, - model: Optional[str] = None, - ) -> List[ClassificationOutput]: - """ - Perform audio classification on the provided audio content. - - Args: - audio (Union[str, Path, bytes, BinaryIO]): - The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an - audio file. - model (`str`, *optional*): - The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub - or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for - audio classification will be used. - - Returns: - `List[Dict]`: The classification output containing the predicted label and its confidence. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.audio_classification("audio.flac") - [{'score': 0.4976358711719513, 'label': 'hap'}, {'score': 0.3677836060523987, 'label': 'neu'},...] 
- ``` - """ - response = await self.post(data=audio, model=model, task="audio-classification") - return _bytes_to_list(response) - - async def automatic_speech_recognition( - self, - audio: ContentT, - *, - model: Optional[str] = None, - ) -> str: - """ - Perform automatic speech recognition (ASR or audio-to-text) on the given audio content. - - Args: - audio (Union[str, Path, bytes, BinaryIO]): - The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file. - model (`str`, *optional*): - The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. If not provided, the default recommended model for ASR will be used. - - Returns: - str: The transcribed text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.automatic_speech_recognition("hello_world.flac") - "hello world" - ``` - """ - response = await self.post(data=audio, model=model, task="automatic-speech-recognition") - return _bytes_to_dict(response)["text"] - - async def conversational( - self, - text: str, - generated_responses: Optional[List[str]] = None, - past_user_inputs: Optional[List[str]] = None, - *, - parameters: Optional[Dict[str, Any]] = None, - model: Optional[str] = None, - ) -> ConversationalOutput: - """ - Generate conversational responses based on the given input text (i.e. chat with the API). - - Args: - text (`str`): - The last input from the user in the conversation. - generated_responses (`List[str]`, *optional*): - A list of strings corresponding to the earlier replies from the model. Defaults to None. - past_user_inputs (`List[str]`, *optional*): - A list of strings corresponding to the earlier replies from the user. Should be the same length as - `generated_responses`. Defaults to None. - parameters (`Dict[str, Any]`, *optional*): - Additional parameters for the conversational task. Defaults to None. For more details about the available - parameters, please refer to [this page](https://huggingface.co/docs/api-inference/detailed_parameters#conversational-task) - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `Dict`: The generated conversational output. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> output = await client.conversational("Hi, who are you?") - >>> output - {'generated_text': 'I am the one who knocks.', 'conversation': {'generated_responses': ['I am the one who knocks.'], 'past_user_inputs': ['Hi, who are you?']}, 'warnings': ['Setting `pad_token_id` to `eos_token_id`:50256 async for open-end generation.']} - >>> await client.conversational( - ... "Wow, that's scary!", - ... generated_responses=output["conversation"]["generated_responses"], - ... 
past_user_inputs=output["conversation"]["past_user_inputs"], - ... ) - ``` - """ - payload: Dict[str, Any] = {"inputs": {"text": text}} - if generated_responses is not None: - payload["inputs"]["generated_responses"] = generated_responses - if past_user_inputs is not None: - payload["inputs"]["past_user_inputs"] = past_user_inputs - if parameters is not None: - payload["parameters"] = parameters - response = await self.post(json=payload, model=model, task="conversational") - return _bytes_to_dict(response) # type: ignore - - async def visual_question_answering( - self, - image: ContentT, - question: str, - *, - model: Optional[str] = None, - ) -> List[str]: - """ - Answering open-ended questions based on an image. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image for the context. It can be raw bytes, an image file, or a URL to an online image. - question (`str`): - Question to be answered. - model (`str`, *optional*): - The model to use for the visual question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended visual question answering model will be used. - Defaults to None. - - Returns: - `List[Dict]`: a list of dictionaries containing the predicted label and associated probability. - - Raises: - `InferenceTimeoutError`: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.visual_question_answering( - ... image="https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", - ... question="What is the animal doing?" - ... ) - [{'score': 0.778609573841095, 'answer': 'laying down'},{'score': 0.6957435607910156, 'answer': 'sitting'}, ...] - ``` - """ - payload: Dict[str, Any] = {"question": question, "image": _b64_encode(image)} - response = await self.post(json=payload, model=model, task="visual-question-answering") - return _bytes_to_list(response) - - async def document_question_answering( - self, - image: ContentT, - question: str, - *, - model: Optional[str] = None, - ) -> List[QuestionAnsweringOutput]: - """ - Answer questions on document images. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image for the context. It can be raw bytes, an image file, or a URL to an online image. - question (`str`): - Question to be answered. - model (`str`, *optional*): - The model to use for the document question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended document question answering model will be used. - Defaults to None. - - Returns: - `List[Dict]`: a list of dictionaries containing the predicted label, associated probability, word ids, and page number. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.document_question_answering(image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png", question="What is the invoice number?") - [{'score': 0.42515629529953003, 'answer': 'us-001', 'start': 16, 'end': 16}] - ``` - """ - payload: Dict[str, Any] = {"question": question, "image": _b64_encode(image)} - response = await self.post(json=payload, model=model, task="document-question-answering") - return _bytes_to_list(response) - - async def feature_extraction(self, text: str, *, model: Optional[str] = None) -> "np.ndarray": - """ - Generate embeddings for a given text. - - Args: - text (`str`): - The text to embed. - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `np.ndarray`: The embedding representing the input text as a float32 numpy array. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.feature_extraction("Hi, who are you?") - array([[ 2.424802 , 2.93384 , 1.1750331 , ..., 1.240499, -0.13776633, -0.7889173 ], - [-0.42943227, -0.6364878 , -1.693462 , ..., 0.41978157, -2.4336355 , 0.6162071 ], - ..., - [ 0.28552425, -0.928395 , -1.2077185 , ..., 0.76810825, -2.1069427 , 0.6236161 ]], dtype=float32) - ``` - """ - response = await self.post(json={"inputs": text}, model=model, task="feature-extraction") - np = _import_numpy() - return np.array(_bytes_to_dict(response), dtype="float32") - - async def fill_mask(self, text: str, *, model: Optional[str] = None) -> List[FillMaskOutput]: - """ - Fill in a hole with a missing word (token to be precise). - - Args: - text (`str`): - a string to be filled from, must contain the [MASK] token (check model card for exact name of the mask). - model (`str`, *optional*): - The model to use for the fill mask task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended fill mask model will be used. - Defaults to None. - - Returns: - `List[Dict]`: a list of fill mask output dictionaries containing the predicted label, associated - probability, token reference, and completed text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.fill_mask("The goal of life is .") - [{'score': 0.06897063553333282, - 'token': 11098, - 'token_str': ' happiness', - 'sequence': 'The goal of life is happiness.'}, - {'score': 0.06554922461509705, - 'token': 45075, - 'token_str': ' immortality', - 'sequence': 'The goal of life is immortality.'}] - ``` - """ - response = await self.post(json={"inputs": text}, model=model, task="fill-mask") - return _bytes_to_list(response) - - async def image_classification( - self, - image: ContentT, - *, - model: Optional[str] = None, - ) -> List[ClassificationOutput]: - """ - Perform image classification on the given image using the specified model. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The image to classify. It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a - deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used. - - Returns: - `List[Dict]`: a list of dictionaries containing the predicted label and associated probability. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg") - [{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...] - ``` - """ - response = await self.post(data=image, model=model, task="image-classification") - return _bytes_to_list(response) - - async def image_segmentation( - self, - image: ContentT, - *, - model: Optional[str] = None, - ) -> List[ImageSegmentationOutput]: - """ - Perform image segmentation on the given image using the specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The image to segment. It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a - deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used. - - Returns: - `List[Dict]`: A list of dictionaries containing the segmented masks and associated attributes. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.image_segmentation("cat.jpg"): - [{'score': 0.989008, 'label': 'LABEL_184', 'mask': }, ...] 
- ``` - """ - - # Segment - response = await self.post(data=image, model=model, task="image-segmentation") - output = _bytes_to_dict(response) - - # Parse masks as PIL Image - if not isinstance(output, list): - raise ValueError(f"Server output must be a list. Got {type(output)}: {str(output)[:200]}...") - for item in output: - item["mask"] = _b64_to_image(item["mask"]) - return output - - async def image_to_image( - self, - image: ContentT, - prompt: Optional[str] = None, - *, - negative_prompt: Optional[str] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: Optional[int] = None, - guidance_scale: Optional[float] = None, - model: Optional[str] = None, - **kwargs, - ) -> "Image": - """ - Perform image-to-image translation using a specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image for translation. It can be raw bytes, an image file, or a URL to an online image. - prompt (`str`, *optional*): - The text prompt to guide the image generation. - negative_prompt (`str`, *optional*): - A negative prompt to guide the translation process. - height (`int`, *optional*): - The height in pixels of the generated image. - width (`int`, *optional*): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*): - Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `Image`: The translated image. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> image = await client.image_to_image("cat.jpg", prompt="turn the cat into a tiger") - >>> image.save("tiger.jpg") - ``` - """ - parameters = { - "prompt": prompt, - "negative_prompt": negative_prompt, - "height": height, - "width": width, - "num_inference_steps": num_inference_steps, - "guidance_scale": guidance_scale, - **kwargs, - } - if all(parameter is None for parameter in parameters.values()): - # Either only an image to send => send as raw bytes - data = image - payload: Optional[Dict[str, Any]] = None - else: - # Or an image + some parameters => use base64 encoding - data = None - payload = {"inputs": _b64_encode(image)} - for key, value in parameters.items(): - if value is not None: - payload.setdefault("parameters", {})[key] = value - - response = await self.post(json=payload, data=data, model=model, task="image-to-image") - return _bytes_to_image(response) - - async def image_to_text(self, image: ContentT, *, model: Optional[str] = None) -> str: - """ - Takes an input image and return text. - - Models can have very different outputs depending on your use case (image captioning, optical character recognition - (OCR), Pix2Struct, etc). 
Please have a look at the model card to learn more about a model's specificities. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image to caption. It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `str`: The generated text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.image_to_text("cat.jpg") - 'a cat standing in a grassy field ' - >>> await client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg") - 'a dog laying on the grass next to a flower pot ' - ``` - """ - response = await self.post(data=image, model=model, task="image-to-text") - return _bytes_to_dict(response)[0]["generated_text"] - - async def list_deployed_models( - self, frameworks: Union[None, str, Literal["all"], List[str]] = None - ) -> Dict[str, List[str]]: - """ - List models currently deployed on the Inference API service. - - This helper checks deployed models framework by framework. By default, it will check the 4 main frameworks that - are supported and account for 95% of the hosted models. However, if you want a complete list of models you can - specify `frameworks="all"` as input. Alternatively, if you know beforehand which framework you are interested - in, you can also restrict the search to this one (e.g. `frameworks="text-generation-inference"`). The more - frameworks are checked, the more time it will take. - - - - This endpoint is mostly useful for discoverability. If you already know which model you want to use and want to - check its availability, you can directly use [`~InferenceClient.get_model_status`]. - - - - Args: - frameworks (`Literal["all"]` or `List[str]` or `str`, *optional*): - The frameworks to filter on. By default only a subset of the available frameworks are tested. If set to - "all", all available frameworks will be tested. It is also possible to provide a single framework or a - custom set of frameworks to check. - - Returns: - `Dict[str, List[str]]`: A dictionary mapping task names to a sorted list of model IDs. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - - # Discover zero-shot-classification models currently deployed - >>> models = await client.list_deployed_models() - >>> models["zero-shot-classification"] - ['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]
- - # List from only 1 framework - >>> await client.list_deployed_models("text-generation-inference") - {'text-generation': ['bigcode/starcoder', 'meta-llama/Llama-2-70b-chat-hf', ...], ...} - ``` - """ - # Resolve which frameworks to check - if frameworks is None: - frameworks = MAIN_INFERENCE_API_FRAMEWORKS - elif frameworks == "all": - frameworks = ALL_INFERENCE_API_FRAMEWORKS - elif isinstance(frameworks, str): - frameworks = [frameworks] - frameworks = list(set(frameworks)) - - # Fetch them iteratively - models_by_task: Dict[str, List[str]] = {} - - def _unpack_response(framework: str, items: List[Dict]) -> None: - for model in items: - if framework == "sentence-transformers": - # Model running with the `sentence-transformers` framework can work with both tasks even if not - # branded as such in the API response - models_by_task.setdefault("feature-extraction", []).append(model["model_id"]) - models_by_task.setdefault("sentence-similarity", []).append(model["model_id"]) - else: - models_by_task.setdefault(model["task"], []).append(model["model_id"]) - - async def _fetch_framework(framework: str) -> None: - async with _import_aiohttp().ClientSession(headers=self.headers) as client: - response = await client.get(f"{INFERENCE_ENDPOINT}/framework/{framework}") - response.raise_for_status() - _unpack_response(framework, await response.json()) - - import asyncio - - await asyncio.gather(*[_fetch_framework(framework) for framework in frameworks]) - - # Sort alphabetically for discoverability and return - for task, models in models_by_task.items(): - models_by_task[task] = sorted(set(models), key=lambda x: x.lower()) - return models_by_task - - async def object_detection( - self, - image: ContentT, - *, - model: Optional[str] = None, - ) -> List[ObjectDetectionOutput]: - """ - Perform object detection on the given image using the specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The image to detect objects on. It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for object detection. Can be a model ID hosted on the Hugging Face Hub or a URL to a - deployed Inference Endpoint. If not provided, the default recommended model for object detection (DETR) will be used. - - Returns: - `List[ObjectDetectionOutput]`: A list of dictionaries containing the bounding boxes and associated attributes. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - `ValueError`: - If the request output is not a List. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.object_detection("people.jpg"): - [{"score":0.9486683011054993,"label":"person","box":{"xmin":59,"ymin":39,"xmax":420,"ymax":510}}, ... ] - ``` - """ - # detect objects - response = await self.post(data=image, model=model, task="object-detection") - output = _bytes_to_dict(response) - if not isinstance(output, list): - raise ValueError(f"Server output must be a list. 
Got {type(output)}: {str(output)[:200]}...") - return output - - async def question_answering( - self, question: str, context: str, *, model: Optional[str] = None - ) -> QuestionAnsweringOutput: - """ - Retrieve the answer to a question from a given text. - - Args: - question (`str`): - Question to be answered. - context (`str`): - The context of the question. - model (`str`): - The model to use for the question answering task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. - - Returns: - `Dict`: a dictionary of question answering output containing the score, start index, end index, and answer. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.question_answering(question="What's my name?", context="My name is Clara and I live in Berkeley.") - {'score': 0.9326562285423279, 'start': 11, 'end': 16, 'answer': 'Clara'} - ``` - """ - - payload: Dict[str, Any] = {"question": question, "context": context} - response = await self.post( - json=payload, - model=model, - task="question-answering", - ) - return _bytes_to_dict(response) # type: ignore - - async def sentence_similarity( - self, sentence: str, other_sentences: List[str], *, model: Optional[str] = None - ) -> List[float]: - """ - Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings. - - Args: - sentence (`str`): - The main sentence to compare to others. - other_sentences (`List[str]`): - The list of sentences to compare to. - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `List[float]`: The embedding representing the input text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.sentence_similarity( - ... "Machine learning is so easy.", - ... other_sentences=[ - ... "Deep learning is so straightforward.", - ... "This is so difficult, like rocket science.", - ... "I can't believe how much I struggled with this.", - ... ], - ... ) - [0.7785726189613342, 0.45876261591911316, 0.2906220555305481] - ``` - """ - response = await self.post( - json={"inputs": {"source_sentence": sentence, "sentences": other_sentences}}, - model=model, - task="sentence-similarity", - ) - return _bytes_to_list(response) - - async def summarization( - self, - text: str, - *, - parameters: Optional[Dict[str, Any]] = None, - model: Optional[str] = None, - ) -> str: - """ - Generate a summary of a given text using a specified model. - - Args: - text (`str`): - The input text to summarize. - parameters (`Dict[str, Any]`, *optional*): - Additional parameters for summarization. Check out this [page](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) - for more details. 
- model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `str`: The generated summary text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.summarization("The Eiffel tower...") - 'The Eiffel tower is one of the most famous landmarks in the world....' - ``` - """ - payload: Dict[str, Any] = {"inputs": text} - if parameters is not None: - payload["parameters"] = parameters - response = await self.post(json=payload, model=model, task="summarization") - return _bytes_to_dict(response)[0]["summary_text"] - - async def table_question_answering( - self, table: Dict[str, Any], query: str, *, model: Optional[str] = None - ) -> TableQuestionAnsweringOutput: - """ - Retrieve the answer to a question from information given in a table. - - Args: - table (`str`): - A table of data represented as a dict of lists where entries are headers and the lists are all the - values, all lists must have the same size. - query (`str`): - The query in plain text that you want to ask the table. - model (`str`): - The model to use for the table-question-answering task. Can be a model ID hosted on the Hugging Face - Hub or a URL to a deployed Inference Endpoint. - - Returns: - `Dict`: a dictionary of table question answering output containing the answer, coordinates, cells and the aggregator used. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> query = "How many stars does the transformers repository have?" - >>> table = {"Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"]} - >>> await client.table_question_answering(table, query, model="google/tapas-base-finetuned-wtq") - {'answer': 'AVERAGE > 36542', 'coordinates': [[0, 1]], 'cells': ['36542'], 'aggregator': 'AVERAGE'} - ``` - """ - response = await self.post( - json={ - "query": query, - "table": table, - }, - model=model, - task="table-question-answering", - ) - return _bytes_to_dict(response) # type: ignore - - async def tabular_classification(self, table: Dict[str, Any], *, model: str) -> List[str]: - """ - Classifying a target category (a group) based on a set of attributes. - - Args: - table (`Dict[str, Any]`): - Set of attributes to classify. - model (`str`): - The model to use for the tabular-classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. - - Returns: - `List`: a list of labels, one per row in the initial table. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> table = { - ... "fixed_acidity": ["7.4", "7.8", "10.3"], - ... "volatile_acidity": ["0.7", "0.88", "0.32"], - ... "citric_acid": ["0", "0", "0.45"], - ... "residual_sugar": ["1.9", "2.6", "6.4"], - ... "chlorides": ["0.076", "0.098", "0.073"], - ... "free_sulfur_dioxide": ["11", "25", "5"], - ... "total_sulfur_dioxide": ["34", "67", "13"], - ... "density": ["0.9978", "0.9968", "0.9976"], - ... "pH": ["3.51", "3.2", "3.23"], - ... "sulphates": ["0.56", "0.68", "0.82"], - ... "alcohol": ["9.4", "9.8", "12.6"], - ... } - >>> await client.tabular_classification(table=table, model="julien-c/wine-quality") - ["5", "5", "5"] - ``` - """ - response = await self.post(json={"table": table}, model=model, task="tabular-classification") - return _bytes_to_list(response) - - async def tabular_regression(self, table: Dict[str, Any], *, model: str) -> List[float]: - """ - Predicting a numerical target value given a set of attributes/features in a table. - - Args: - table (`Dict[str, Any]`): - Set of attributes stored in a table. The attributes used to predict the target can be both numerical and categorical. - model (`str`): - The model to use for the tabular-regression task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. - - Returns: - `List`: a list of predicted numerical target values. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> table = { - ... "Height": ["11.52", "12.48", "12.3778"], - ... "Length1": ["23.2", "24", "23.9"], - ... "Length2": ["25.4", "26.3", "26.5"], - ... "Length3": ["30", "31.2", "31.1"], - ... "Species": ["Bream", "Bream", "Bream"], - ... "Width": ["4.02", "4.3056", "4.6961"], - ... } - >>> await client.tabular_regression(table, model="scikit-learn/Fish-Weight") - [110, 120, 130] - ``` - """ - response = await self.post(json={"table": table}, model=model, task="tabular-regression") - return _bytes_to_list(response) - - async def text_classification(self, text: str, *, model: Optional[str] = None) -> List[ClassificationOutput]: - """ - Perform text classification (e.g. sentiment-analysis) on the given text. - - Args: - text (`str`): - A string to be classified. - model (`str`, *optional*): - The model to use for the text classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended text classification model will be used. - Defaults to None. - - Returns: - `List[Dict]`: a list of dictionaries containing the predicted label and associated probability. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.text_classification("I like you") - [{'label': 'POSITIVE', 'score': 0.9998695850372314}, {'label': 'NEGATIVE', 'score': 0.0001304351753788069}] - ``` - """ - response = await self.post(json={"inputs": text}, model=model, task="text-classification") - return _bytes_to_list(response)[0] - - @overload - async def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[False] = ..., - stream: Literal[False] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> str: - ... - - @overload - async def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[True] = ..., - stream: Literal[False] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> TextGenerationResponse: - ... - - @overload - async def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[False] = ..., - stream: Literal[True] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> AsyncIterable[str]: - ... - - @overload - async def text_generation( - self, - prompt: str, - *, - details: Literal[True] = ..., - stream: Literal[True] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> AsyncIterable[TextGenerationStreamResponse]: - ... 
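The four `@overload` stubs above only affect static typing: they map each combination of the `details` and `stream` flags to the return type a checker such as mypy should expect (`str`, `TextGenerationResponse`, or an async iterable of tokens). Below is a minimal sketch of what that looks like from the caller's side; it is illustrative only and assumes an `AsyncInferenceClient` that can reach the default recommended text-generation model.

```py
# Illustrative sketch of the return types implied by the overloads above.
# Assumes the default recommended text-generation model is reachable.
import asyncio
from huggingface_hub import AsyncInferenceClient

async def main() -> None:
    client = AsyncInferenceClient()

    # details=False, stream=False -> plain string
    text = await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
    print(text)

    # details=True, stream=False -> TextGenerationResponse with token-level details
    detailed = await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
    print(detailed.generated_text)

    # details=False, stream=True -> async iterable of generated tokens
    async for token in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True):
        print(token)

asyncio.run(main())
```

At runtime all of these calls go through the single implementation that follows; the overloads exist purely so type checkers can narrow the return type.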
- - async def text_generation( - self, - prompt: str, - *, - details: bool = False, - stream: bool = False, - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - decoder_input_details: bool = False, - ) -> Union[str, TextGenerationResponse, AsyncIterable[str], AsyncIterable[TextGenerationStreamResponse]]: - """ - Given a prompt, generate the following text. - - It is recommended to have Pydantic installed in order to get inputs validated. This is preferable as it allow - early failures. - - API endpoint is supposed to run with the `text-generation-inference` backend (TGI). This backend is the - go-to solution to run large language models at scale. However, for some smaller models (e.g. "gpt2") the - default `transformers` + `api-inference` solution is still in use. Both approaches have very similar APIs, but - not exactly the same. This method is compatible with both approaches but some parameters are only available for - `text-generation-inference`. If some parameters are ignored, a warning message is triggered but the process - continues correctly. - - To learn more about the TGI project, please refer to https://github.com/huggingface/text-generation-inference. - - Args: - prompt (`str`): - Input text. - details (`bool`, *optional*): - By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens, - probabilities, seed, finish reason, etc.). Only available for models running on with the - `text-generation-inference` backend. - stream (`bool`, *optional*): - By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of - tokens to be returned. Only available for models running on with the `text-generation-inference` - backend. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - do_sample (`bool`): - Activate logits sampling - max_new_tokens (`int`): - Maximum number of generated tokens - best_of (`int`): - Generate best_of sequences and return the one if the highest token logprobs - repetition_penalty (`float`): - The parameter for repetition penalty. 1.0 means no penalty. See [this - paper](https://arxiv.org/pdf/1909.05858.pdf) for more details. - return_full_text (`bool`): - Whether to prepend the prompt to the generated text - seed (`int`): - Random sampling seed - stop_sequences (`List[str]`): - Stop generating tokens if a member of `stop_sequences` is generated - temperature (`float`): - The value used to module the logits distribution. - top_k (`int`): - The number of highest probability vocabulary tokens to keep for top-k-filtering. - top_p (`float`): - If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or - higher are kept for generation. 
- truncate (`int`): - Truncate inputs tokens to the given size - typical_p (`float`): - Typical Decoding mass - See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information - watermark (`bool`): - Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226) - decoder_input_details (`bool`): - Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken - into account. Defaults to `False`. - - Returns: - `Union[str, TextGenerationResponse, Iterable[str], Iterable[TextGenerationStreamResponse]]`: - Generated text returned from the server: - - if `stream=False` and `details=False`, the generated text is returned as a `str` (default) - - if `stream=True` and `details=False`, the generated text is returned token by token as a `Iterable[str]` - - if `stream=False` and `details=True`, the generated text is returned with more details as a [`~huggingface_hub.inference._text_generation.TextGenerationResponse`] - - if `details=True` and `stream=True`, the generated text is returned token by token as a iterable of [`~huggingface_hub.inference._text_generation.TextGenerationStreamResponse`] - - Raises: - `ValidationError`: - If input values are not valid. No HTTP call is made to the server. - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - - # Case 1: generate text - >>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12) - '100% open source and built to be easy to use.' - - # Case 2: iterate over the generated tokens. Useful async for large generation. - >>> async for token in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True): - ... print(token) - 100 - % - open - source - and - built - to - be - easy - to - use - . - - # Case 3: get more details about the generation process. - >>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True) - TextGenerationResponse( - generated_text='100% open source and built to be easy to use.', - details=Details( - finish_reason=, - generated_tokens=12, - seed=None, - prefill=[ - InputToken(id=487, text='The', logprob=None), - InputToken(id=53789, text=' hugging', logprob=-13.171875), - (...) - InputToken(id=204, text=' ', logprob=-7.0390625) - ], - tokens=[ - Token(id=1425, text='100', logprob=-1.0175781, special=False), - Token(id=16, text='%', logprob=-0.0463562, special=False), - (...) - Token(id=25, text='.', logprob=-0.5703125, special=False) - ], - best_of_sequences=None - ) - ) - - # Case 4: iterate over the generated tokens with more details. - # Last object is more complete, containing the full generated text and the finish reason. - >>> async for details in await client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True): - ... print(details) - ... 
- TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token( - id=25, - text='.', - logprob=-0.5703125, - special=False), - generated_text='100% open source and built to be easy to use.', - details=StreamDetails(finish_reason=, generated_tokens=12, seed=None) - ) - ``` - """ - # NOTE: Text-generation integration is taken from the text-generation-inference project. It has more features - # like input/output validation (if Pydantic is installed). See `_text_generation.py` header for more details. - - if decoder_input_details and not details: - warnings.warn( - "`decoder_input_details=True` has been passed to the server but `details=False` is set meaning that" - " the output from the server will be truncated." - ) - decoder_input_details = False - - # Validate parameters - parameters = TextGenerationParameters( - best_of=best_of, - details=details, - do_sample=do_sample, - max_new_tokens=max_new_tokens, - repetition_penalty=repetition_penalty, - return_full_text=return_full_text, - seed=seed, - stop=stop_sequences if stop_sequences is not None else [], - temperature=temperature, - top_k=top_k, - top_p=top_p, - truncate=truncate, - typical_p=typical_p, - watermark=watermark, - decoder_input_details=decoder_input_details, - ) - request = TextGenerationRequest(inputs=prompt, stream=stream, parameters=parameters) - payload = asdict(request) - - # Remove some parameters if not a TGI server - if not _is_tgi_server(model): - ignored_parameters = [] - for key in "watermark", "stop", "details", "decoder_input_details": - if payload["parameters"][key] is not None: - ignored_parameters.append(key) - del payload["parameters"][key] - if len(ignored_parameters) > 0: - warnings.warn( - "API endpoint/model for text-generation is not served via TGI. Ignoring parameters" - f" {ignored_parameters}.", - UserWarning, - ) - if details: - warnings.warn( - "API endpoint/model for text-generation is not served via TGI. 
Parameter `details=True` will" - " be ignored meaning only the generated text will be returned.", - UserWarning, - ) - details = False - if stream: - raise ValueError( - "API endpoint/model for text-generation is not served via TGI. Cannot return output as a stream." - " Please pass `stream=False` as input." - ) - - # Handle errors separately for more precise error messages - try: - bytes_output = await self.post(json=payload, model=model, task="text-generation", stream=stream) # type: ignore - except _import_aiohttp().ClientResponseError as e: - error_message = getattr(e, "response_error_payload", {}).get("error", "") - if e.code == 400 and "The following `model_kwargs` are not used by the model" in error_message: - _set_as_non_tgi(model) - return await self.text_generation( # type: ignore - prompt=prompt, - details=details, - stream=stream, - model=model, - do_sample=do_sample, - max_new_tokens=max_new_tokens, - best_of=best_of, - repetition_penalty=repetition_penalty, - return_full_text=return_full_text, - seed=seed, - stop_sequences=stop_sequences, - temperature=temperature, - top_k=top_k, - top_p=top_p, - truncate=truncate, - typical_p=typical_p, - watermark=watermark, - decoder_input_details=decoder_input_details, - ) - raise_text_generation_error(e) - - # Parse output - if stream: - return _async_stream_text_generation_response(bytes_output, details) # type: ignore - - data = _bytes_to_dict(bytes_output)[0] - return TextGenerationResponse(**data) if details else data["generated_text"] - - async def text_to_image( - self, - prompt: str, - *, - negative_prompt: Optional[str] = None, - height: Optional[float] = None, - width: Optional[float] = None, - num_inference_steps: Optional[float] = None, - guidance_scale: Optional[float] = None, - model: Optional[str] = None, - **kwargs, - ) -> "Image": - """ - Generate an image based on a given text using a specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - prompt (`str`): - The prompt to generate an image from. - negative_prompt (`str`, *optional*): - An optional negative prompt for the image generation. - height (`float`, *optional*): - The height in pixels of the image to generate. - width (`float`, *optional*): - The width in pixels of the image to generate. - num_inference_steps (`int`, *optional*): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*): - Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `Image`: The generated image. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - - >>> image = await client.text_to_image("An astronaut riding a horse on the moon.") - >>> image.save("astronaut.png") - - >>> image = await client.text_to_image( - ... "An astronaut riding a horse on the moon.", - ... 
negative_prompt="low resolution, blurry", - ... model="stabilityai/stable-diffusion-2-1", - ... ) - >>> image.save("better_astronaut.png") - ``` - """ - payload = {"inputs": prompt} - parameters = { - "negative_prompt": negative_prompt, - "height": height, - "width": width, - "num_inference_steps": num_inference_steps, - "guidance_scale": guidance_scale, - **kwargs, - } - for key, value in parameters.items(): - if value is not None: - payload.setdefault("parameters", {})[key] = value # type: ignore - response = await self.post(json=payload, model=model, task="text-to-image") - return _bytes_to_image(response) - - async def text_to_speech(self, text: str, *, model: Optional[str] = None) -> bytes: - """ - Synthesize an audio of a voice pronouncing a given text. - - Args: - text (`str`): - The text to synthesize. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `bytes`: The generated audio. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from pathlib import Path - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - - >>> audio = await client.text_to_speech("Hello world") - >>> Path("hello_world.flac").write_bytes(audio) - ``` - """ - return await self.post(json={"inputs": text}, model=model, task="text-to-speech") - - async def token_classification(self, text: str, *, model: Optional[str] = None) -> List[TokenClassificationOutput]: - """ - Perform token classification on the given text. - Usually used for sentence parsing, either grammatical, or Named Entity Recognition (NER) to understand keywords contained within text. - - Args: - text (`str`): - A string to be classified. - model (`str`, *optional*): - The model to use for the token classification task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended token classification model will be used. - Defaults to None. - - Returns: - `List[Dict]`: List of token classification outputs containing the entity group, confidence score, word, start and end index. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.token_classification("My name is Sarah Jessica Parker but you can call me Jessica") - [{'entity_group': 'PER', - 'score': 0.9971321225166321, - 'word': 'Sarah Jessica Parker', - 'start': 11, - 'end': 31}, - {'entity_group': 'PER', - 'score': 0.9773476123809814, - 'word': 'Jessica', - 'start': 52, - 'end': 59}] - ``` - """ - payload: Dict[str, Any] = {"inputs": text} - response = await self.post( - json=payload, - model=model, - task="token-classification", - ) - return _bytes_to_list(response) - - async def translation(self, text: str, *, model: Optional[str] = None) -> str: - """ - Convert text from one language to another. 
- - Check out https://huggingface.co/tasks/translation for more information on how to choose the best model for - your specific use case. Source and target languages usually depends on the model. - - Args: - text (`str`): - A string to be translated. - model (`str`, *optional*): - The model to use for the translation task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended translation model will be used. - Defaults to None. - - Returns: - `str`: The generated translated text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.translation("My name is Wolfgang and I live in Berlin") - 'Mein Name ist Wolfgang und ich lebe in Berlin.' - >>> await client.translation("My name is Wolfgang and I live in Berlin", model="Helsinki-NLP/opus-mt-en-fr") - "Je m'appelle Wolfgang et je vis à Berlin." - ``` - """ - response = await self.post(json={"inputs": text}, model=model, task="translation") - return _bytes_to_dict(response)[0]["translation_text"] - - async def zero_shot_classification( - self, text: str, labels: List[str], *, multi_label: bool = False, model: Optional[str] = None - ) -> List[ClassificationOutput]: - """ - Provide as input a text and a set of candidate labels to classify the input text. - - Args: - text (`str`): - The input text to classify. - labels (`List[str]`): - List of string possible labels. There must be at least 2 labels. - multi_label (`bool`): - Boolean that is set to True if classes can overlap. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `List[Dict]`: List of classification outputs containing the predicted labels and their confidence. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> text = ( - ... "A new model offers an explanation async for how the Galilean satellites formed around the solar system's" - ... "largest world. Konstantin Batygin did not set out to solve one of the solar system's most puzzling" - ... " mysteries when he went async for a run up a hill in Nice, France." - ... 
) - >>> labels = ["space & cosmos", "scientific discovery", "microbiology", "robots", "archeology"] - >>> await client.zero_shot_classification(text, labels) - [ - {"label": "scientific discovery", "score": 0.7961668968200684}, - {"label": "space & cosmos", "score": 0.18570658564567566}, - {"label": "microbiology", "score": 0.00730885099619627}, - {"label": "archeology", "score": 0.006258360575884581}, - {"label": "robots", "score": 0.004559356719255447}, - ] - >>> await client.zero_shot_classification(text, labels, multi_label=True) - [ - {"label": "scientific discovery", "score": 0.9829297661781311}, - {"label": "space & cosmos", "score": 0.755190908908844}, - {"label": "microbiology", "score": 0.0005462635890580714}, - {"label": "archeology", "score": 0.00047131875180639327}, - {"label": "robots", "score": 0.00030448526376858354}, - ] - ``` - """ - # Raise ValueError if input is less than 2 labels - if len(labels) < 2: - raise ValueError("You must specify at least 2 classes to compare.") - - response = await self.post( - json={ - "inputs": text, - "parameters": { - "candidate_labels": ",".join(labels), - "multi_label": multi_label, - }, - }, - model=model, - task="zero-shot-classification", - ) - output = _bytes_to_dict(response) - return [{"label": label, "score": score} for label, score in zip(output["labels"], output["scores"])] - - async def zero_shot_image_classification( - self, image: ContentT, labels: List[str], *, model: Optional[str] = None - ) -> List[ClassificationOutput]: - """ - Provide input image and text labels to predict text labels for the image. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image to caption. It can be raw bytes, an image file, or a URL to an online image. - labels (`List[str]`): - List of string possible labels. There must be at least 2 labels. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `List[Dict]`: List of classification outputs containing the predicted labels and their confidence. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `aiohttp.ClientResponseError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - - >>> await client.zero_shot_image_classification( - ... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg", - ... labels=["dog", "cat", "horse"], - ... ) - [{"label": "dog", "score": 0.956}, ...] 
- ``` - """ - # Raise ValueError if input is less than 2 labels - if len(labels) < 2: - raise ValueError("You must specify at least 2 classes to compare.") - - response = await self.post( - json={"image": _b64_encode(image), "parameters": {"candidate_labels": ",".join(labels)}}, - model=model, - task="zero-shot-image-classification", - ) - return _bytes_to_list(response) - - def _resolve_url(self, model: Optional[str] = None, task: Optional[str] = None) -> str: - model = model or self.model - - # If model is already a URL, ignore `task` and return directly - if model is not None and (model.startswith("http://") or model.startswith("https://")): - return model - - # If no model but task is set => fetch the recommended one for this task - if model is None: - if task is None: - raise ValueError( - "You must specify at least a model (repo_id or URL) or a task, either when instantiating" - " `InferenceClient` or when making a request." - ) - model = _get_recommended_model(task) - - # Compute InferenceAPI url - return ( - # Feature-extraction and sentence-similarity are the only cases where we handle models with several tasks. - f"{INFERENCE_ENDPOINT}/pipeline/{task}/{model}" - if task in ("feature-extraction", "sentence-similarity") - # Otherwise, we use the default endpoint - else f"{INFERENCE_ENDPOINT}/models/{model}" - ) - - async def get_model_status(self, model: Optional[str] = None) -> ModelStatus: - """ - Get the status of a model hosted on the Inference API. - - - - This endpoint is mostly useful when you already know which model you want to use and want to check its - availability. If you want to discover already deployed models, you should rather use [`~InferenceClient.list_deployed_models`]. - - - - Args: - model (`str`, *optional*): - Identifier of the model for which the status is going to be checked. If model is not provided, - the model associated with this instance of [`InferenceClient`] will be used. Only the InferenceAPI service can be checked, so the - identifier cannot be a URL. - - - Returns: - [`ModelStatus`]: An instance of the ModelStatus dataclass, containing information - about the state of the model: load, state, compute type and framework.
- - Example: - ```py - # Must be run in an async context - >>> from huggingface_hub import AsyncInferenceClient - >>> client = AsyncInferenceClient() - >>> await client.get_model_status("bigcode/starcoder") - ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference') - ``` - """ - model = model or self.model - if model is None: - raise ValueError("Model id not provided.") - if model.startswith("https://"): - raise NotImplementedError("Model status is only available for Inference API endpoints.") - url = f"{INFERENCE_ENDPOINT}/status/{model}" - - async with _import_aiohttp().ClientSession(headers=self.headers) as client: - response = await client.get(url) - response.raise_for_status() - response_data = await response.json() - - if "error" in response_data: - raise ValueError(response_data["error"]) - - return ModelStatus( - loaded=response_data["loaded"], - state=response_data["state"], - compute_type=response_data["compute_type"], - framework=response_data["framework"], - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/anchored_artists.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/anchored_artists.py deleted file mode 100644 index 1238310b462bc65cc90f861d1034beb4bf0f1fd3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/anchored_artists.py +++ /dev/null @@ -1,462 +0,0 @@ -from matplotlib import _api, transforms -from matplotlib.offsetbox import (AnchoredOffsetbox, AuxTransformBox, - DrawingArea, TextArea, VPacker) -from matplotlib.patches import (Rectangle, Ellipse, ArrowStyle, - FancyArrowPatch, PathPatch) -from matplotlib.text import TextPath - -__all__ = ['AnchoredDrawingArea', 'AnchoredAuxTransformBox', - 'AnchoredEllipse', 'AnchoredSizeBar', 'AnchoredDirectionArrows'] - - -class AnchoredDrawingArea(AnchoredOffsetbox): - def __init__(self, width, height, xdescent, ydescent, - loc, pad=0.4, borderpad=0.5, prop=None, frameon=True, - **kwargs): - """ - An anchored container with a fixed size and fillable `.DrawingArea`. - - Artists added to the *drawing_area* will have their coordinates - interpreted as pixels. Any transformations set on the artists will be - overridden. - - Parameters - ---------- - width, height : float - Width and height of the container, in pixels. - xdescent, ydescent : float - Descent of the container in the x- and y- direction, in pixels. - loc : str - Location of this artist. Valid locations are - 'upper left', 'upper center', 'upper right', - 'center left', 'center', 'center right', - 'lower left', 'lower center', 'lower right'. - For backward compatibility, numeric values are accepted as well. - See the parameter *loc* of `.Legend` for details. - pad : float, default: 0.4 - Padding around the child objects, in fraction of the font size. - borderpad : float, default: 0.5 - Border padding, in fraction of the font size. - prop : `~matplotlib.font_manager.FontProperties`, optional - Font property used as a reference for paddings. - frameon : bool, default: True - If True, draw a box around this artist. - **kwargs - Keyword arguments forwarded to `.AnchoredOffsetbox`. - - Attributes - ---------- - drawing_area : `~matplotlib.offsetbox.DrawingArea` - A container for artists to display. 
- - Examples - -------- - To display blue and red circles of different sizes in the upper right - of an Axes *ax*: - - >>> ada = AnchoredDrawingArea(20, 20, 0, 0, - ... loc='upper right', frameon=False) - >>> ada.drawing_area.add_artist(Circle((10, 10), 10, fc="b")) - >>> ada.drawing_area.add_artist(Circle((30, 10), 5, fc="r")) - >>> ax.add_artist(ada) - """ - self.da = DrawingArea(width, height, xdescent, ydescent) - self.drawing_area = self.da - - super().__init__( - loc, pad=pad, borderpad=borderpad, child=self.da, prop=None, - frameon=frameon, **kwargs - ) - - -class AnchoredAuxTransformBox(AnchoredOffsetbox): - def __init__(self, transform, loc, - pad=0.4, borderpad=0.5, prop=None, frameon=True, **kwargs): - """ - An anchored container with transformed coordinates. - - Artists added to the *drawing_area* are scaled according to the - coordinates of the transformation used. The dimensions of this artist - will scale to contain the artists added. - - Parameters - ---------- - transform : `~matplotlib.transforms.Transform` - The transformation object for the coordinate system in use, i.e., - :attr:`matplotlib.axes.Axes.transData`. - loc : str - Location of this artist. Valid locations are - 'upper left', 'upper center', 'upper right', - 'center left', 'center', 'center right', - 'lower left', 'lower center', 'lower right'. - For backward compatibility, numeric values are accepted as well. - See the parameter *loc* of `.Legend` for details. - pad : float, default: 0.4 - Padding around the child objects, in fraction of the font size. - borderpad : float, default: 0.5 - Border padding, in fraction of the font size. - prop : `~matplotlib.font_manager.FontProperties`, optional - Font property used as a reference for paddings. - frameon : bool, default: True - If True, draw a box around this artist. - **kwargs - Keyword arguments forwarded to `.AnchoredOffsetbox`. - - Attributes - ---------- - drawing_area : `~matplotlib.offsetbox.AuxTransformBox` - A container for artists to display. - - Examples - -------- - To display an ellipse in the upper left, with a width of 0.1 and - height of 0.4 in data coordinates: - - >>> box = AnchoredAuxTransformBox(ax.transData, loc='upper left') - >>> el = Ellipse((0, 0), width=0.1, height=0.4, angle=30) - >>> box.drawing_area.add_artist(el) - >>> ax.add_artist(box) - """ - self.drawing_area = AuxTransformBox(transform) - - super().__init__(loc, pad=pad, borderpad=borderpad, - child=self.drawing_area, prop=prop, frameon=frameon, - **kwargs) - - -@_api.deprecated("3.8") -class AnchoredEllipse(AnchoredOffsetbox): - def __init__(self, transform, width, height, angle, loc, - pad=0.1, borderpad=0.1, prop=None, frameon=True, **kwargs): - """ - Draw an anchored ellipse of a given size. - - Parameters - ---------- - transform : `~matplotlib.transforms.Transform` - The transformation object for the coordinate system in use, i.e., - :attr:`matplotlib.axes.Axes.transData`. - width, height : float - Width and height of the ellipse, given in coordinates of - *transform*. - angle : float - Rotation of the ellipse, in degrees, anti-clockwise. - loc : str - Location of the ellipse. Valid locations are - 'upper left', 'upper center', 'upper right', - 'center left', 'center', 'center right', - 'lower left', 'lower center', 'lower right'. - For backward compatibility, numeric values are accepted as well. - See the parameter *loc* of `.Legend` for details. - pad : float, default: 0.1 - Padding around the ellipse, in fraction of the font size. 
- borderpad : float, default: 0.1 - Border padding, in fraction of the font size. - frameon : bool, default: True - If True, draw a box around the ellipse. - prop : `~matplotlib.font_manager.FontProperties`, optional - Font property used as a reference for paddings. - **kwargs - Keyword arguments forwarded to `.AnchoredOffsetbox`. - - Attributes - ---------- - ellipse : `~matplotlib.patches.Ellipse` - Ellipse patch drawn. - """ - self._box = AuxTransformBox(transform) - self.ellipse = Ellipse((0, 0), width, height, angle=angle) - self._box.add_artist(self.ellipse) - - super().__init__(loc, pad=pad, borderpad=borderpad, child=self._box, - prop=prop, frameon=frameon, **kwargs) - - -class AnchoredSizeBar(AnchoredOffsetbox): - def __init__(self, transform, size, label, loc, - pad=0.1, borderpad=0.1, sep=2, - frameon=True, size_vertical=0, color='black', - label_top=False, fontproperties=None, fill_bar=None, - **kwargs): - """ - Draw a horizontal scale bar with a center-aligned label underneath. - - Parameters - ---------- - transform : `~matplotlib.transforms.Transform` - The transformation object for the coordinate system in use, i.e., - :attr:`matplotlib.axes.Axes.transData`. - size : float - Horizontal length of the size bar, given in coordinates of - *transform*. - label : str - Label to display. - loc : str - Location of the size bar. Valid locations are - 'upper left', 'upper center', 'upper right', - 'center left', 'center', 'center right', - 'lower left', 'lower center', 'lower right'. - For backward compatibility, numeric values are accepted as well. - See the parameter *loc* of `.Legend` for details. - pad : float, default: 0.1 - Padding around the label and size bar, in fraction of the font - size. - borderpad : float, default: 0.1 - Border padding, in fraction of the font size. - sep : float, default: 2 - Separation between the label and the size bar, in points. - frameon : bool, default: True - If True, draw a box around the horizontal bar and label. - size_vertical : float, default: 0 - Vertical length of the size bar, given in coordinates of - *transform*. - color : str, default: 'black' - Color for the size bar and label. - label_top : bool, default: False - If True, the label will be over the size bar. - fontproperties : `~matplotlib.font_manager.FontProperties`, optional - Font properties for the label text. - fill_bar : bool, optional - If True and if *size_vertical* is nonzero, the size bar will - be filled in with the color specified by the size bar. - Defaults to True if *size_vertical* is greater than - zero and False otherwise. - **kwargs - Keyword arguments forwarded to `.AnchoredOffsetbox`. - - Attributes - ---------- - size_bar : `~matplotlib.offsetbox.AuxTransformBox` - Container for the size bar. - txt_label : `~matplotlib.offsetbox.TextArea` - Container for the label of the size bar. - - Notes - ----- - If *prop* is passed as a keyword argument, but *fontproperties* is - not, then *prop* is assumed to be the intended *fontproperties*. - Using both *prop* and *fontproperties* is not supported. - - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> import numpy as np - >>> from mpl_toolkits.axes_grid1.anchored_artists import ( - ... 
AnchoredSizeBar) - >>> fig, ax = plt.subplots() - >>> ax.imshow(np.random.random((10, 10))) - >>> bar = AnchoredSizeBar(ax.transData, 3, '3 data units', 4) - >>> ax.add_artist(bar) - >>> fig.show() - - Using all the optional parameters - - >>> import matplotlib.font_manager as fm - >>> fontprops = fm.FontProperties(size=14, family='monospace') - >>> bar = AnchoredSizeBar(ax.transData, 3, '3 units', 4, pad=0.5, - ... sep=5, borderpad=0.5, frameon=False, - ... size_vertical=0.5, color='white', - ... fontproperties=fontprops) - """ - if fill_bar is None: - fill_bar = size_vertical > 0 - - self.size_bar = AuxTransformBox(transform) - self.size_bar.add_artist(Rectangle((0, 0), size, size_vertical, - fill=fill_bar, facecolor=color, - edgecolor=color)) - - if fontproperties is None and 'prop' in kwargs: - fontproperties = kwargs.pop('prop') - - if fontproperties is None: - textprops = {'color': color} - else: - textprops = {'color': color, 'fontproperties': fontproperties} - - self.txt_label = TextArea(label, textprops=textprops) - - if label_top: - _box_children = [self.txt_label, self.size_bar] - else: - _box_children = [self.size_bar, self.txt_label] - - self._box = VPacker(children=_box_children, - align="center", - pad=0, sep=sep) - - super().__init__(loc, pad=pad, borderpad=borderpad, child=self._box, - prop=fontproperties, frameon=frameon, **kwargs) - - -class AnchoredDirectionArrows(AnchoredOffsetbox): - def __init__(self, transform, label_x, label_y, length=0.15, - fontsize=0.08, loc='upper left', angle=0, aspect_ratio=1, - pad=0.4, borderpad=0.4, frameon=False, color='w', alpha=1, - sep_x=0.01, sep_y=0, fontproperties=None, back_length=0.15, - head_width=10, head_length=15, tail_width=2, - text_props=None, arrow_props=None, - **kwargs): - """ - Draw two perpendicular arrows to indicate directions. - - Parameters - ---------- - transform : `~matplotlib.transforms.Transform` - The transformation object for the coordinate system in use, i.e., - :attr:`matplotlib.axes.Axes.transAxes`. - label_x, label_y : str - Label text for the x and y arrows - length : float, default: 0.15 - Length of the arrow, given in coordinates of *transform*. - fontsize : float, default: 0.08 - Size of label strings, given in coordinates of *transform*. - loc : str, default: 'upper left' - Location of the arrow. Valid locations are - 'upper left', 'upper center', 'upper right', - 'center left', 'center', 'center right', - 'lower left', 'lower center', 'lower right'. - For backward compatibility, numeric values are accepted as well. - See the parameter *loc* of `.Legend` for details. - angle : float, default: 0 - The angle of the arrows in degrees. - aspect_ratio : float, default: 1 - The ratio of the length of arrow_x and arrow_y. - Negative numbers can be used to change the direction. - pad : float, default: 0.4 - Padding around the labels and arrows, in fraction of the font size. - borderpad : float, default: 0.4 - Border padding, in fraction of the font size. - frameon : bool, default: False - If True, draw a box around the arrows and labels. - color : str, default: 'white' - Color for the arrows and labels. - alpha : float, default: 1 - Alpha values of the arrows and labels - sep_x, sep_y : float, default: 0.01 and 0 respectively - Separation between the arrows and labels in coordinates of - *transform*. - fontproperties : `~matplotlib.font_manager.FontProperties`, optional - Font properties for the label text. - back_length : float, default: 0.15 - Fraction of the arrow behind the arrow crossing. 
- head_width : float, default: 10 - Width of arrow head, sent to `.ArrowStyle`. - head_length : float, default: 15 - Length of arrow head, sent to `.ArrowStyle`. - tail_width : float, default: 2 - Width of arrow tail, sent to `.ArrowStyle`. - text_props, arrow_props : dict - Properties of the text and arrows, passed to `.TextPath` and - `.FancyArrowPatch`. - **kwargs - Keyword arguments forwarded to `.AnchoredOffsetbox`. - - Attributes - ---------- - arrow_x, arrow_y : `~matplotlib.patches.FancyArrowPatch` - Arrow x and y - text_path_x, text_path_y : `~matplotlib.text.TextPath` - Path for arrow labels - p_x, p_y : `~matplotlib.patches.PathPatch` - Patch for arrow labels - box : `~matplotlib.offsetbox.AuxTransformBox` - Container for the arrows and labels. - - Notes - ----- - If *prop* is passed as a keyword argument, but *fontproperties* is - not, then *prop* is assumed to be the intended *fontproperties*. - Using both *prop* and *fontproperties* is not supported. - - Examples - -------- - >>> import matplotlib.pyplot as plt - >>> import numpy as np - >>> from mpl_toolkits.axes_grid1.anchored_artists import ( - ... AnchoredDirectionArrows) - >>> fig, ax = plt.subplots() - >>> ax.imshow(np.random.random((10, 10))) - >>> arrows = AnchoredDirectionArrows(ax.transAxes, '111', '110') - >>> ax.add_artist(arrows) - >>> fig.show() - - Using several of the optional parameters, creating downward pointing - arrow and high contrast text labels. - - >>> import matplotlib.font_manager as fm - >>> fontprops = fm.FontProperties(family='monospace') - >>> arrows = AnchoredDirectionArrows(ax.transAxes, 'East', 'South', - ... loc='lower left', color='k', - ... aspect_ratio=-1, sep_x=0.02, - ... sep_y=-0.01, - ... text_props={'ec':'w', 'fc':'k'}, - ... fontproperties=fontprops) - """ - if arrow_props is None: - arrow_props = {} - - if text_props is None: - text_props = {} - - arrowstyle = ArrowStyle("Simple", - head_width=head_width, - head_length=head_length, - tail_width=tail_width) - - if fontproperties is None and 'prop' in kwargs: - fontproperties = kwargs.pop('prop') - - if 'color' not in arrow_props: - arrow_props['color'] = color - - if 'alpha' not in arrow_props: - arrow_props['alpha'] = alpha - - if 'color' not in text_props: - text_props['color'] = color - - if 'alpha' not in text_props: - text_props['alpha'] = alpha - - t_start = transform - t_end = t_start + transforms.Affine2D().rotate_deg(angle) - - self.box = AuxTransformBox(t_end) - - length_x = length - length_y = length*aspect_ratio - - self.arrow_x = FancyArrowPatch( - (0, back_length*length_y), - (length_x, back_length*length_y), - arrowstyle=arrowstyle, - shrinkA=0.0, - shrinkB=0.0, - **arrow_props) - - self.arrow_y = FancyArrowPatch( - (back_length*length_x, 0), - (back_length*length_x, length_y), - arrowstyle=arrowstyle, - shrinkA=0.0, - shrinkB=0.0, - **arrow_props) - - self.box.add_artist(self.arrow_x) - self.box.add_artist(self.arrow_y) - - text_path_x = TextPath(( - length_x+sep_x, back_length*length_y+sep_y), label_x, - size=fontsize, prop=fontproperties) - self.p_x = PathPatch(text_path_x, transform=t_start, **text_props) - self.box.add_artist(self.p_x) - - text_path_y = TextPath(( - length_x*back_length+sep_x, length_y*(1-back_length)+sep_y), - label_y, size=fontsize, prop=fontproperties) - self.p_y = PathPatch(text_path_y, **text_props) - self.box.add_artist(self.p_y) - - super().__init__(loc, pad=pad, borderpad=borderpad, child=self.box, - frameon=frameon, **kwargs) diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_floating_axes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_floating_axes.py deleted file mode 100644 index 31dcf24bb22d911e8d2c498e34159c11dc31f287..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axisartist/tests/test_floating_axes.py +++ /dev/null @@ -1,115 +0,0 @@ -import numpy as np - -import matplotlib.pyplot as plt -import matplotlib.projections as mprojections -import matplotlib.transforms as mtransforms -from matplotlib.testing.decorators import image_comparison -from mpl_toolkits.axisartist.axislines import Subplot -from mpl_toolkits.axisartist.floating_axes import ( - FloatingAxes, GridHelperCurveLinear) -from mpl_toolkits.axisartist.grid_finder import FixedLocator -from mpl_toolkits.axisartist import angle_helper - - -def test_subplot(): - fig = plt.figure(figsize=(5, 5)) - ax = Subplot(fig, 111) - fig.add_subplot(ax) - - -# Rather high tolerance to allow ongoing work with floating axes internals; -# remove when image is regenerated. -@image_comparison(['curvelinear3.png'], style='default', tol=5) -def test_curvelinear3(): - fig = plt.figure(figsize=(5, 5)) - - tr = (mtransforms.Affine2D().scale(np.pi / 180, 1) + - mprojections.PolarAxes.PolarTransform()) - grid_helper = GridHelperCurveLinear( - tr, - extremes=(0, 360, 10, 3), - grid_locator1=angle_helper.LocatorDMS(15), - grid_locator2=FixedLocator([2, 4, 6, 8, 10]), - tick_formatter1=angle_helper.FormatterDMS(), - tick_formatter2=None) - ax1 = fig.add_subplot(axes_class=FloatingAxes, grid_helper=grid_helper) - - r_scale = 10 - tr2 = mtransforms.Affine2D().scale(1, 1 / r_scale) + tr - grid_helper2 = GridHelperCurveLinear( - tr2, - extremes=(0, 360, 10 * r_scale, 3 * r_scale), - grid_locator2=FixedLocator([30, 60, 90])) - - ax1.axis["right"] = axis = grid_helper2.new_fixed_axis("right", axes=ax1) - - ax1.axis["left"].label.set_text("Test 1") - ax1.axis["right"].label.set_text("Test 2") - ax1.axis["left", "right"].set_visible(False) - - axis = grid_helper.new_floating_axis(1, 7, axes=ax1, - axis_direction="bottom") - ax1.axis["z"] = axis - axis.toggle(all=True, label=True) - axis.label.set_text("z = ?") - axis.label.set_visible(True) - axis.line.set_color("0.5") - - ax2 = ax1.get_aux_axes(tr) - - xx, yy = [67, 90, 75, 30], [2, 5, 8, 4] - ax2.scatter(xx, yy) - l, = ax2.plot(xx, yy, "k-") - l.set_clip_path(ax1.patch) - - -# Rather high tolerance to allow ongoing work with floating axes internals; -# remove when image is regenerated. -@image_comparison(['curvelinear4.png'], style='default', tol=0.9) -def test_curvelinear4(): - # Remove this line when this test image is regenerated. - plt.rcParams['text.kerning_factor'] = 6 - - fig = plt.figure(figsize=(5, 5)) - - tr = (mtransforms.Affine2D().scale(np.pi / 180, 1) + - mprojections.PolarAxes.PolarTransform()) - grid_helper = GridHelperCurveLinear( - tr, - extremes=(120, 30, 10, 0), - grid_locator1=angle_helper.LocatorDMS(5), - grid_locator2=FixedLocator([2, 4, 6, 8, 10]), - tick_formatter1=angle_helper.FormatterDMS(), - tick_formatter2=None) - ax1 = fig.add_subplot(axes_class=FloatingAxes, grid_helper=grid_helper) - ax1.clear() # Check that clear() also restores the correct limits on ax1. 
- - ax1.axis["left"].label.set_text("Test 1") - ax1.axis["right"].label.set_text("Test 2") - ax1.axis["top"].set_visible(False) - - axis = grid_helper.new_floating_axis(1, 70, axes=ax1, - axis_direction="bottom") - ax1.axis["z"] = axis - axis.toggle(all=True, label=True) - axis.label.set_axis_direction("top") - axis.label.set_text("z = ?") - axis.label.set_visible(True) - axis.line.set_color("0.5") - - ax2 = ax1.get_aux_axes(tr) - - xx, yy = [67, 90, 75, 30], [2, 5, 8, 4] - ax2.scatter(xx, yy) - l, = ax2.plot(xx, yy, "k-") - l.set_clip_path(ax1.patch) - - -def test_axis_direction(): - # Check that axis direction is propagated on a floating axis - fig = plt.figure() - ax = Subplot(fig, 111) - fig.add_subplot(ax) - ax.axis['y'] = ax.new_floating_axis(nth_coord=1, value=0, - axis_direction='left') - assert ax.axis['y']._axis_direction == 'left' diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_map.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_map.py deleted file mode 100644 index 3d41b7cc7094d237fa8d31501ce90a99b04fe4e6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/categorical/test_map.py +++ /dev/null @@ -1,154 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - Categorical, - Index, - Series, -) -import pandas._testing as tm - - -@pytest.fixture(params=[None, "ignore"]) -def na_action(request): - return request.param - - -@pytest.mark.parametrize( - "data, categories", - [ - (list("abcbca"), list("cab")), - (pd.interval_range(0, 3).repeat(3), pd.interval_range(0, 3)), - ], - ids=["string", "interval"], -) -def test_map_str(data, categories, ordered, na_action): - # GH 31202 - override base class since we want to maintain categorical/ordered - cat = Categorical(data, categories=categories, ordered=ordered) - result = cat.map(str, na_action=na_action) - expected = Categorical( - map(str, data), categories=map(str, categories), ordered=ordered - ) - tm.assert_categorical_equal(result, expected) - - -def test_map(na_action): - cat = Categorical(list("ABABC"), categories=list("CBA"), ordered=True) - result = cat.map(lambda x: x.lower(), na_action=na_action) - exp = Categorical(list("ababc"), categories=list("cba"), ordered=True) - tm.assert_categorical_equal(result, exp) - - cat = Categorical(list("ABABC"), categories=list("BAC"), ordered=False) - result = cat.map(lambda x: x.lower(), na_action=na_action) - exp = Categorical(list("ababc"), categories=list("bac"), ordered=False) - tm.assert_categorical_equal(result, exp) - - # GH 12766: Return an index not an array - result = cat.map(lambda x: 1, na_action=na_action) - exp = Index(np.array([1] * 5, dtype=np.int64)) - tm.assert_index_equal(result, exp) - - # change categories dtype - cat = Categorical(list("ABABC"), categories=list("BAC"), ordered=False) - - def f(x): - return {"A": 10, "B": 20, "C": 30}.get(x) - - result = cat.map(f, na_action=na_action) - exp = Categorical([10, 20, 10, 20, 30], categories=[20, 10, 30], ordered=False) - tm.assert_categorical_equal(result, exp) - - mapper = Series([10, 20, 30], index=["A", "B", "C"]) - result = cat.map(mapper, na_action=na_action) - tm.assert_categorical_equal(result, exp) - - result = cat.map({"A": 10, "B": 20, "C": 30}, na_action=na_action) - tm.assert_categorical_equal(result, exp) - - -@pytest.mark.parametrize( - ("data", "f", "expected"), - ( - ([1, 1, 
np.nan], pd.isna, Index([False, False, True])), - ([1, 2, np.nan], pd.isna, Index([False, False, True])), - ([1, 1, np.nan], {1: False}, Categorical([False, False, np.nan])), - ([1, 2, np.nan], {1: False, 2: False}, Index([False, False, np.nan])), - ( - [1, 1, np.nan], - Series([False, False]), - Categorical([False, False, np.nan]), - ), - ( - [1, 2, np.nan], - Series([False] * 3), - Index([False, False, np.nan]), - ), - ), -) -def test_map_with_nan_none(data, f, expected): # GH 24241 - values = Categorical(data) - result = values.map(f, na_action=None) - if isinstance(expected, Categorical): - tm.assert_categorical_equal(result, expected) - else: - tm.assert_index_equal(result, expected) - - -@pytest.mark.parametrize( - ("data", "f", "expected"), - ( - ([1, 1, np.nan], pd.isna, Categorical([False, False, np.nan])), - ([1, 2, np.nan], pd.isna, Index([False, False, np.nan])), - ([1, 1, np.nan], {1: False}, Categorical([False, False, np.nan])), - ([1, 2, np.nan], {1: False, 2: False}, Index([False, False, np.nan])), - ( - [1, 1, np.nan], - Series([False, False]), - Categorical([False, False, np.nan]), - ), - ( - [1, 2, np.nan], - Series([False, False, False]), - Index([False, False, np.nan]), - ), - ), -) -def test_map_with_nan_ignore(data, f, expected): # GH 24241 - values = Categorical(data) - result = values.map(f, na_action="ignore") - if data[1] == 1: - tm.assert_categorical_equal(result, expected) - else: - tm.assert_index_equal(result, expected) - - -def test_map_with_dict_or_series(na_action): - orig_values = ["a", "B", 1, "a"] - new_values = ["one", 2, 3.0, "one"] - cat = Categorical(orig_values) - - mapper = Series(new_values[:-1], index=orig_values[:-1]) - result = cat.map(mapper, na_action=na_action) - - # Order of categories in result can be different - expected = Categorical(new_values, categories=[3.0, 2, "one"]) - tm.assert_categorical_equal(result, expected) - - mapper = dict(zip(orig_values[:-1], new_values[:-1])) - result = cat.map(mapper, na_action=na_action) - # Order of categories in result can be different - tm.assert_categorical_equal(result, expected) - - -def test_map_na_action_no_default_deprecated(): - # GH51645 - cat = Categorical(["a", "b", "c"]) - msg = ( - "The default value of 'ignore' for the `na_action` parameter in " - "pandas.Categorical.map is deprecated and will be " - "changed to 'None' in a future version. Please set na_action to the " - "desired value to avoid seeing this warning" - ) - with tm.assert_produces_warning(FutureWarning, match=msg): - cat.map(lambda x: x) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/styles/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/styles/__init__.py deleted file mode 100644 index e437d170ed78a453d72cadba14f3aae57ed92351..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/styles/__init__.py +++ /dev/null @@ -1,93 +0,0 @@ -""" - pygments.styles - ~~~~~~~~~~~~~~~ - - Contains built-in styles. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.plugin import find_plugin_styles -from pip._vendor.pygments.util import ClassNotFound - - -#: Maps style names to 'submodule::classname'. 
-STYLE_MAP = { - 'default': 'default::DefaultStyle', - 'emacs': 'emacs::EmacsStyle', - 'friendly': 'friendly::FriendlyStyle', - 'friendly_grayscale': 'friendly_grayscale::FriendlyGrayscaleStyle', - 'colorful': 'colorful::ColorfulStyle', - 'autumn': 'autumn::AutumnStyle', - 'murphy': 'murphy::MurphyStyle', - 'manni': 'manni::ManniStyle', - 'material': 'material::MaterialStyle', - 'monokai': 'monokai::MonokaiStyle', - 'perldoc': 'perldoc::PerldocStyle', - 'pastie': 'pastie::PastieStyle', - 'borland': 'borland::BorlandStyle', - 'trac': 'trac::TracStyle', - 'native': 'native::NativeStyle', - 'fruity': 'fruity::FruityStyle', - 'bw': 'bw::BlackWhiteStyle', - 'vim': 'vim::VimStyle', - 'vs': 'vs::VisualStudioStyle', - 'tango': 'tango::TangoStyle', - 'rrt': 'rrt::RrtStyle', - 'xcode': 'xcode::XcodeStyle', - 'igor': 'igor::IgorStyle', - 'paraiso-light': 'paraiso_light::ParaisoLightStyle', - 'paraiso-dark': 'paraiso_dark::ParaisoDarkStyle', - 'lovelace': 'lovelace::LovelaceStyle', - 'algol': 'algol::AlgolStyle', - 'algol_nu': 'algol_nu::Algol_NuStyle', - 'arduino': 'arduino::ArduinoStyle', - 'rainbow_dash': 'rainbow_dash::RainbowDashStyle', - 'abap': 'abap::AbapStyle', - 'solarized-dark': 'solarized::SolarizedDarkStyle', - 'solarized-light': 'solarized::SolarizedLightStyle', - 'sas': 'sas::SasStyle', - 'stata': 'stata_light::StataLightStyle', - 'stata-light': 'stata_light::StataLightStyle', - 'stata-dark': 'stata_dark::StataDarkStyle', - 'inkpot': 'inkpot::InkPotStyle', - 'zenburn': 'zenburn::ZenburnStyle', - 'gruvbox-dark': 'gruvbox::GruvboxDarkStyle', - 'gruvbox-light': 'gruvbox::GruvboxLightStyle', - 'dracula': 'dracula::DraculaStyle', - 'one-dark': 'onedark::OneDarkStyle', - 'lilypond' : 'lilypond::LilyPondStyle', -} - - -def get_style_by_name(name): - if name in STYLE_MAP: - mod, cls = STYLE_MAP[name].split('::') - builtin = "yes" - else: - for found_name, style in find_plugin_styles(): - if name == found_name: - return style - # perhaps it got dropped into our styles package - builtin = "" - mod = name - cls = name.title() + "Style" - - try: - mod = __import__('pygments.styles.' + mod, None, None, [cls]) - except ImportError: - raise ClassNotFound("Could not find style module %r" % mod + - (builtin and ", though it should be builtin") + ".") - try: - return getattr(mod, cls) - except AttributeError: - raise ClassNotFound("Could not find style class %r in style module." % cls) - - -def get_all_styles(): - """Return a generator for all styles by name, - both builtin and plugin.""" - yield from STYLE_MAP - for name, _ in find_plugin_styles(): - yield name diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/javascript.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/javascript.py deleted file mode 100644 index bc5e2e43cb8da41f277917e4d8d10a4648394088..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/javascript.py +++ /dev/null @@ -1,1588 +0,0 @@ -""" - pygments.lexers.javascript - ~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for JavaScript and related languages. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re - -from pygments.lexer import bygroups, combined, default, do_insertions, include, \ - inherit, Lexer, RegexLexer, this, using, words, line_re -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Other, Generic, Whitespace -from pygments.util import get_bool_opt -import pygments.unistring as uni - -__all__ = ['JavascriptLexer', 'KalLexer', 'LiveScriptLexer', 'DartLexer', - 'TypeScriptLexer', 'LassoLexer', 'ObjectiveJLexer', - 'CoffeeScriptLexer', 'MaskLexer', 'EarlGreyLexer', 'JuttleLexer', - 'NodeConsoleLexer'] - -JS_IDENT_START = ('(?:[$_' + uni.combine('Lu', 'Ll', 'Lt', 'Lm', 'Lo', 'Nl') + - ']|\\\\u[a-fA-F0-9]{4})') -JS_IDENT_PART = ('(?:[$' + uni.combine('Lu', 'Ll', 'Lt', 'Lm', 'Lo', 'Nl', - 'Mn', 'Mc', 'Nd', 'Pc') + - '\u200c\u200d]|\\\\u[a-fA-F0-9]{4})') -JS_IDENT = JS_IDENT_START + '(?:' + JS_IDENT_PART + ')*' - - -class JavascriptLexer(RegexLexer): - """ - For JavaScript source code. - """ - - name = 'JavaScript' - url = 'https://www.ecma-international.org/publications-and-standards/standards/ecma-262/' - aliases = ['javascript', 'js'] - filenames = ['*.js', '*.jsm', '*.mjs', '*.cjs'] - mimetypes = ['application/javascript', 'application/x-javascript', - 'text/x-javascript', 'text/javascript'] - - flags = re.DOTALL | re.MULTILINE - - tokens = { - 'commentsandwhitespace': [ - (r'\s+', Whitespace), - (r')?', Other), - (r'[^[<]+', Other), - ], - 'nosquarebrackets': [ - (r'\[noprocess\]', Comment.Preproc, 'noprocess'), - (r'\[', Other), - (r'<\?(lasso(script)?|=)', Comment.Preproc, 'anglebrackets'), - (r'<(!--.*?-->)?', Other), - (r'[^[<]+', Other), - ], - 'noprocess': [ - (r'\[/noprocess\]', Comment.Preproc, '#pop'), - (r'\[', Other), - (r'[^[]', Other), - ], - 'squarebrackets': [ - (r'\]', Comment.Preproc, '#pop'), - include('lasso'), - ], - 'anglebrackets': [ - (r'\?>', Comment.Preproc, '#pop'), - include('lasso'), - ], - 'lassofile': [ - (r'\]|\?>', Comment.Preproc, '#pop'), - include('lasso'), - ], - 'whitespacecomments': [ - (r'\s+', Whitespace), - (r'(//.*?)(\s*)$', bygroups(Comment.Single, Whitespace)), - (r'/\*\*!.*?\*/', String.Doc), - (r'/\*.*?\*/', Comment.Multiline), - ], - 'lasso': [ - # whitespace/comments - include('whitespacecomments'), - - # literals - (r'\d*\.\d+(e[+-]?\d+)?', Number.Float), - (r'0x[\da-f]+', Number.Hex), - (r'\d+', Number.Integer), - (r'(infinity|NaN)\b', Number), - (r"'", String.Single, 'singlestring'), - (r'"', String.Double, 'doublestring'), - (r'`[^`]*`', String.Backtick), - - # names - (r'\$[a-z_][\w.]*', Name.Variable), - (r'#([a-z_][\w.]*|\d+\b)', Name.Variable.Instance), - (r"(\.)(\s*)('[a-z_][\w.]*')", - bygroups(Name.Builtin.Pseudo, Whitespace, Name.Variable.Class)), - (r"(self)(\s*)(->)(\s*)('[a-z_][\w.]*')", - bygroups(Name.Builtin.Pseudo, Whitespace, Operator, Whitespace, - Name.Variable.Class)), - (r'(\.\.?)(\s*)([a-z_][\w.]*(=(?!=))?)', - bygroups(Name.Builtin.Pseudo, Whitespace, Name.Other.Member)), - (r'(->\\?|&)(\s*)([a-z_][\w.]*(=(?!=))?)', - bygroups(Operator, Whitespace, Name.Other.Member)), - (r'(?)(self|inherited|currentcapture|givenblock)\b', - Name.Builtin.Pseudo), - (r'-(?!infinity)[a-z_][\w.]*', Name.Attribute), - (r'(::)(\s*)([a-z_][\w.]*)', - bygroups(Punctuation, Whitespace, Name.Label)), - (r'(error_(code|msg)_\w+|Error_AddError|Error_ColumnRestriction|' - r'Error_DatabaseConnectionUnavailable|Error_DatabaseTimeout|' - r'Error_DeleteError|Error_FieldRestriction|Error_FileNotFound|' - r'Error_InvalidDatabase|Error_InvalidPassword|' - 
r'Error_InvalidUsername|Error_ModuleNotFound|' - r'Error_NoError|Error_NoPermission|Error_OutOfMemory|' - r'Error_ReqColumnMissing|Error_ReqFieldMissing|' - r'Error_RequiredColumnMissing|Error_RequiredFieldMissing|' - r'Error_UpdateError)\b', Name.Exception), - - # definitions - (r'(define)(\s+)([a-z_][\w.]*)(\s*)(=>)(\s*)(type|trait|thread)\b', - bygroups(Keyword.Declaration, Whitespace, Name.Class, - Whitespace, Operator, Whitespace, Keyword)), - (r'(define)(\s+)([a-z_][\w.]*)(\s*)(->)(\s*)([a-z_][\w.]*=?|[-+*/%])', - bygroups(Keyword.Declaration, Whitespace, Name.Class, - Whitespace, Operator, Whitespace, Name.Function), - 'signature'), - (r'(define)(\s+)([a-z_][\w.]*)', - bygroups(Keyword.Declaration, Whitespace, Name.Function), 'signature'), - (r'(public|protected|private|provide)(\s+)(([a-z_][\w.]*=?|[-+*/%])' - r'(?=\s*\())', bygroups(Keyword, Whitespace, Name.Function), - 'signature'), - (r'(public|protected|private|provide)(\s+)([a-z_][\w.]*)', - bygroups(Keyword, Whitespace, Name.Function)), - - # keywords - (r'(true|false|none|minimal|full|all|void)\b', Keyword.Constant), - (r'(local|var|variable|global|data(?=\s))\b', Keyword.Declaration), - (r'(array|date|decimal|duration|integer|map|pair|string|tag|xml|' - r'null|boolean|bytes|keyword|list|locale|queue|set|stack|' - r'staticarray)\b', Keyword.Type), - (r'([a-z_][\w.]*)(\s+)(in)\b', bygroups(Name, Whitespace, Keyword)), - (r'(let|into)(\s+)([a-z_][\w.]*)', bygroups(Keyword, Whitespace, Name)), - (r'require\b', Keyword, 'requiresection'), - (r'(/?)(Namespace_Using)\b', bygroups(Punctuation, Keyword.Namespace)), - (r'(/?)(Cache|Database_Names|Database_SchemaNames|' - r'Database_TableNames|Define_Tag|Define_Type|Email_Batch|' - r'Encode_Set|HTML_Comment|Handle|Handle_Error|Header|If|Inline|' - r'Iterate|LJAX_Target|Link|Link_CurrentAction|Link_CurrentGroup|' - r'Link_CurrentRecord|Link_Detail|Link_FirstGroup|Link_FirstRecord|' - r'Link_LastGroup|Link_LastRecord|Link_NextGroup|Link_NextRecord|' - r'Link_PrevGroup|Link_PrevRecord|Log|Loop|Output_None|Portal|' - r'Private|Protect|Records|Referer|Referrer|Repeating|ResultSet|' - r'Rows|Search_Args|Search_Arguments|Select|Sort_Args|' - r'Sort_Arguments|Thread_Atomic|Value_List|While|Abort|Case|Else|' - r'Fail_If|Fail_IfNot|Fail|If_Empty|If_False|If_Null|If_True|' - r'Loop_Abort|Loop_Continue|Loop_Count|Params|Params_Up|Return|' - r'Return_Value|Run_Children|SOAP_DefineTag|SOAP_LastRequest|' - r'SOAP_LastResponse|Tag_Name|ascending|average|by|define|' - r'descending|do|equals|frozen|group|handle_failure|import|in|into|' - r'join|let|match|max|min|on|order|parent|protected|provide|public|' - r'require|returnhome|skip|split_thread|sum|take|thread|to|trait|' - r'type|where|with|yield|yieldhome)\b', - bygroups(Punctuation, Keyword)), - - # other - (r',', Punctuation, 'commamember'), - (r'(and|or|not)\b', Operator.Word), - (r'([a-z_][\w.]*)(\s*)(::)(\s*)([a-z_][\w.]*)?(\s*=(?!=))', - bygroups(Name, Whitespace, Punctuation, Whitespace, Name.Label, - Operator)), - (r'(/?)([\w.]+)', bygroups(Punctuation, Name.Other)), - (r'(=)(n?bw|n?ew|n?cn|lte?|gte?|n?eq|n?rx|ft)\b', - bygroups(Operator, Operator.Word)), - (r':=|[-+*/%=<>&|!?\\]+', Operator), - (r'[{}():;,@^]', Punctuation), - ], - 'singlestring': [ - (r"'", String.Single, '#pop'), - (r"[^'\\]+", String.Single), - include('escape'), - (r"\\", String.Single), - ], - 'doublestring': [ - (r'"', String.Double, '#pop'), - (r'[^"\\]+', String.Double), - include('escape'), - (r'\\', String.Double), - ], - 'escape': [ - 
(r'\\(U[\da-f]{8}|u[\da-f]{4}|x[\da-f]{1,2}|[0-7]{1,3}|:[^:\n\r]+:|' - r'[abefnrtv?"\'\\]|$)', String.Escape), - ], - 'signature': [ - (r'=>', Operator, '#pop'), - (r'\)', Punctuation, '#pop'), - (r'[(,]', Punctuation, 'parameter'), - include('lasso'), - ], - 'parameter': [ - (r'\)', Punctuation, '#pop'), - (r'-?[a-z_][\w.]*', Name.Attribute, '#pop'), - (r'\.\.\.', Name.Builtin.Pseudo), - include('lasso'), - ], - 'requiresection': [ - (r'(([a-z_][\w.]*=?|[-+*/%])(?=\s*\())', Name, 'requiresignature'), - (r'(([a-z_][\w.]*=?|[-+*/%])(?=(\s*::\s*[\w.]+)?\s*,))', Name), - (r'[a-z_][\w.]*=?|[-+*/%]', Name, '#pop'), - (r'(::)(\s*)([a-z_][\w.]*)', - bygroups(Punctuation, Whitespace, Name.Label)), - (r',', Punctuation), - include('whitespacecomments'), - ], - 'requiresignature': [ - (r'(\)(?=(\s*::\s*[\w.]+)?\s*,))', Punctuation, '#pop'), - (r'\)', Punctuation, '#pop:2'), - (r'-?[a-z_][\w.]*', Name.Attribute), - (r'(::)(\s*)([a-z_][\w.]*)', - bygroups(Punctuation, Whitespace, Name.Label)), - (r'\.\.\.', Name.Builtin.Pseudo), - (r'[(,]', Punctuation), - include('whitespacecomments'), - ], - 'commamember': [ - (r'(([a-z_][\w.]*=?|[-+*/%])' - r'(?=\s*(\(([^()]*\([^()]*\))*[^)]*\)\s*)?(::[\w.\s]+)?=>))', - Name.Function, 'signature'), - include('whitespacecomments'), - default('#pop'), - ], - } - - def __init__(self, **options): - self.builtinshighlighting = get_bool_opt( - options, 'builtinshighlighting', True) - self.requiredelimiters = get_bool_opt( - options, 'requiredelimiters', False) - - self._builtins = set() - self._members = set() - if self.builtinshighlighting: - from pygments.lexers._lasso_builtins import BUILTINS, MEMBERS - for key, value in BUILTINS.items(): - self._builtins.update(value) - for key, value in MEMBERS.items(): - self._members.update(value) - RegexLexer.__init__(self, **options) - - def get_tokens_unprocessed(self, text): - stack = ['root'] - if self.requiredelimiters: - stack.append('delimiters') - for index, token, value in \ - RegexLexer.get_tokens_unprocessed(self, text, stack): - if (token is Name.Other and value.lower() in self._builtins or - token is Name.Other.Member and - value.lower().rstrip('=') in self._members): - yield index, Name.Builtin, value - continue - yield index, token, value - - def analyse_text(text): - rv = 0.0 - if 'bin/lasso9' in text: - rv += 0.8 - if re.search(r'<\?lasso', text, re.I): - rv += 0.4 - if re.search(r'local\(', text, re.I): - rv += 0.4 - return rv - - -class ObjectiveJLexer(RegexLexer): - """ - For Objective-J source code with preprocessor directives. - - .. 
versionadded:: 1.3 - """ - - name = 'Objective-J' - aliases = ['objective-j', 'objectivej', 'obj-j', 'objj'] - filenames = ['*.j'] - mimetypes = ['text/x-objective-j'] - - #: optional Comment or Whitespace - _ws = r'(?:\s|//[^\n]*\n|/[*](?:[^*]|[*][^/])*[*]/)*' - - flags = re.DOTALL | re.MULTILINE - - tokens = { - 'root': [ - include('whitespace'), - - # function definition - (r'^(' + _ws + r'[+-]' + _ws + r')([(a-zA-Z_].*?[^(])(' + _ws + r'\{)', - bygroups(using(this), using(this, state='function_signature'), - using(this))), - - # class definition - (r'(@interface|@implementation)(\s+)', bygroups(Keyword, Whitespace), - 'classname'), - (r'(@class|@protocol)(\s*)', bygroups(Keyword, Whitespace), - 'forward_classname'), - (r'(\s*)(@end)(\s*)', bygroups(Whitespace, Keyword, Whitespace)), - - include('statements'), - ('[{()}]', Punctuation), - (';', Punctuation), - ], - 'whitespace': [ - (r'(@import)(\s+)("(?:\\\\|\\"|[^"])*")', - bygroups(Comment.Preproc, Whitespace, String.Double)), - (r'(@import)(\s+)(<(?:\\\\|\\>|[^>])*>)', - bygroups(Comment.Preproc, Whitespace, String.Double)), - (r'(#(?:include|import))(\s+)("(?:\\\\|\\"|[^"])*")', - bygroups(Comment.Preproc, Whitespace, String.Double)), - (r'(#(?:include|import))(\s+)(<(?:\\\\|\\>|[^>])*>)', - bygroups(Comment.Preproc, Whitespace, String.Double)), - - (r'#if\s+0', Comment.Preproc, 'if0'), - (r'#', Comment.Preproc, 'macro'), - - (r'\s+', Whitespace), - (r'(\\)(\n)', - bygroups(String.Escape, Whitespace)), # line continuation - (r'//(\n|(.|\n)*?[^\\]\n)', Comment.Single), - (r'/(\\\n)?[*](.|\n)*?[*](\\\n)?/', Comment.Multiline), - (r' https://geags.com/2uCsv8



    -

    AutoCAD Product Key allows users to share files and work together on them. With AutoCAD Product Key, users can edit 3D models of buildings and vehicles, create layouts for houses, design websites, and animate a variety of content. The basic function of AutoCAD is to create drawings that can be used for building.

    -

    As its name implies, the AutoCAD Plant crack is used for 3D modeling and for manufacturing 3D designs. 2D drawings and floor plans can also be created with this tool. The software is commonly used in both office and home settings.

    -

    Software products that run on a computer usually have their own area in the Software & Support folder. The AutoCAD crack will be added to the Program Files folder. You can also download the AutoCAD 2009 crack and install it on any computer running Windows 10. This is a big advantage over other software products: you do not have to worry about the license key and other such issues.

    -

    -

    AutoCAD Express can also be used to download 2D drawing templates that users can edit and print out. If users do not have AutoCAD, they can download the program from the Autodesk website. An AutoCAD 2010 key can also be found on the same website. These are the basic functions of this software.

    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Copytrans Contacts V1018 Activation Code.md b/spaces/quidiaMuxgu/Expedit-SAM/Copytrans Contacts V1018 Activation Code.md deleted file mode 100644 index 83fecf3e5e58c31a875e9c457037e40690157078..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Copytrans Contacts V1018 Activation Code.md +++ /dev/null @@ -1,37 +0,0 @@ - -

    How to Activate Copytrans Contacts V1018 with a Valid Code

    - -

    Copytrans Contacts is a powerful tool that allows you to manage, backup, and transfer your iPhone contacts, messages, calendars, notes, and more. With Copytrans Contacts V1018, you can enjoy new features and improvements, such as:

    -

    Copytrans Contacts V1018 Activation Code


    Download: https://geags.com/2uCqBu



    - -
      -
    • Support for iOS 15 and iPhone 13
    • Ability to export contacts to Excel, CSV, or vCard formats
    • Option to merge duplicate contacts and fix formatting issues
    • Enhanced user interface and performance

    To use Copytrans Contacts V1018, you need to activate it with a valid code. A code is a combination of letters and numbers that you receive after purchasing a license from the official website. If you don't have a code yet, you can get one here: https://www.copytrans.net/buy/

    - -

    If you already have a code, follow these steps to activate Copytrans Contacts V1018:

    - -
      -
    1. Download and install Copytrans Contacts V1018 from this link: https://www.copytrans.net/download/
    2. Launch Copytrans Contacts and connect your iPhone to your computer via USB cable
    3. Click on the "Activate" button at the top right corner of the program window
    4. Enter your code in the pop-up window and click on "OK"
    5. Wait for the activation process to complete and enjoy using Copytrans Contacts V1018
    - -

    If you encounter any problems with the activation process, please contact the support team at support@copytrans.net or visit the FAQ page: https://www.copytrans.net/support/

    -

    - -

    Copytrans Contacts V1018 is a reliable and easy-to-use solution for managing your iPhone data. With a valid activation code, you can unlock all its features and benefits. Don't wait any longer and get your code today!

    - -

    Why do you need Copytrans Contacts V1018? If you have ever lost your iPhone contacts, messages, or other data due to accidental deletion, device damage, or software update, you know how frustrating and stressful it can be. You may also want to transfer your data to a new device, a different computer, or a cloud service. Or you may simply want to organize and edit your data in a more convenient way.

    - -

    That's where Copytrans Contacts V1018 comes in handy. It is a versatile and user-friendly tool that lets you backup, restore, and transfer your iPhone data with ease. You can access and manage your data on your computer, without using iTunes or iCloud. You can also export your data to various formats and applications, such as Outlook, Gmail, Excel, Word, and more. You can even edit your contacts and messages directly on your computer, and sync the changes with your iPhone.

    - -

    How does Copytrans Contacts V1018 work? Copytrans Contacts V1018 works with any iPhone model and any iOS version. It is compatible with Windows 10, 8.1, 8, 7, and Vista. It is also safe and secure, as it does not modify or overwrite your original data. To use Copytrans Contacts V1018, you just need to download and install it on your computer, connect your iPhone via USB cable, and start managing your data.

    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/IntroductiontoinstrumentationandcontrolakghoshpdfUPD Freedownload.md b/spaces/quidiaMuxgu/Expedit-SAM/IntroductiontoinstrumentationandcontrolakghoshpdfUPD Freedownload.md deleted file mode 100644 index ea329ce124a92ab5773a8b677bcbc39df91ae9a3..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/IntroductiontoinstrumentationandcontrolakghoshpdfUPD Freedownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Introduction to Instrumentation and Control A K Ghosh PDF Free Download


    Download ››› https://geags.com/2uCqde



    - -introductiontoinstrumentationandcontrolakghoshpdffreedownload · interligados aden stone, o rei dos vampiros · fast gsm omap 1.0.0.7 · DAEMON Tools Ultra ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/qwertyuiee/AnimeBackgroundGAN/app.py b/spaces/qwertyuiee/AnimeBackgroundGAN/app.py deleted file mode 100644 index d86273f767b63f5fc12458291d49c68d56ccb15e..0000000000000000000000000000000000000000 --- a/spaces/qwertyuiee/AnimeBackgroundGAN/app.py +++ /dev/null @@ -1,187 +0,0 @@ -from cgitb import enable -from ctypes.wintypes import HFONT -import os -import sys -import torch -import gradio as gr -import numpy as np -import torchvision.transforms as transforms - - -from torch.autograd import Variable -from network.Transformer import Transformer -from huggingface_hub import hf_hub_download - -from PIL import Image - -import logging - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -# Constants - -MAX_DIMENSION = 1280 -MODEL_PATH = "models" -COLOUR_MODEL = "RGB" - -STYLE_SHINKAI = "Makoto Shinkai" -STYLE_HOSODA = "Mamoru Hosoda" -STYLE_MIYAZAKI = "Hayao Miyazaki" -STYLE_KON = "Satoshi Kon" -DEFAULT_STYLE = STYLE_SHINKAI -STYLE_CHOICE_LIST = [STYLE_SHINKAI, STYLE_HOSODA, STYLE_MIYAZAKI, STYLE_KON] - -MODEL_REPO_SHINKAI = "akiyamasho/AnimeBackgroundGAN-Shinkai" -MODEL_FILE_SHINKAI = "shinkai_makoto.pth" - -MODEL_REPO_HOSODA = "akiyamasho/AnimeBackgroundGAN-Hosoda" -MODEL_FILE_HOSODA = "hosoda_mamoru.pth" - -MODEL_REPO_MIYAZAKI = "akiyamasho/AnimeBackgroundGAN-Miyazaki" -MODEL_FILE_MIYAZAKI = "miyazaki_hayao.pth" - -MODEL_REPO_KON = "akiyamasho/AnimeBackgroundGAN-Kon" -MODEL_FILE_KON = "kon_satoshi.pth" - -# Model Initalisation -shinkai_model_hfhub = hf_hub_download(repo_id=MODEL_REPO_SHINKAI, filename=MODEL_FILE_SHINKAI) -hosoda_model_hfhub = hf_hub_download(repo_id=MODEL_REPO_HOSODA, filename=MODEL_FILE_HOSODA) -miyazaki_model_hfhub = hf_hub_download(repo_id=MODEL_REPO_MIYAZAKI, filename=MODEL_FILE_MIYAZAKI) -kon_model_hfhub = hf_hub_download(repo_id=MODEL_REPO_KON, filename=MODEL_FILE_KON) - -shinkai_model = Transformer() -hosoda_model = Transformer() -miyazaki_model = Transformer() -kon_model = Transformer() - -enable_gpu = torch.cuda.is_available() - -if enable_gpu: - # If you have multiple cards, - # you can assign to a specific card, eg: "cuda:0"("cuda") or "cuda:1" - # Use the first card by default: "cuda" - device = torch.device("cuda") -else: - device = "cpu" - -shinkai_model.load_state_dict( - torch.load(shinkai_model_hfhub, device) -) -hosoda_model.load_state_dict( - torch.load(hosoda_model_hfhub, device) -) -miyazaki_model.load_state_dict( - torch.load(miyazaki_model_hfhub, device) -) -kon_model.load_state_dict( - torch.load(kon_model_hfhub, device) -) - -if enable_gpu: - shinkai_model = shinkai_model.to(device) - hosoda_model = hosoda_model.to(device) - miyazaki_model = miyazaki_model.to(device) - kon_model = kon_model.to(device) - -shinkai_model.eval() -hosoda_model.eval() -miyazaki_model.eval() -kon_model.eval() - - -# Functions - -def get_model(style): - if style == STYLE_SHINKAI: - return shinkai_model - elif style == STYLE_HOSODA: - return hosoda_model - elif style == STYLE_MIYAZAKI: - return miyazaki_model - elif style == STYLE_KON: - return kon_model - else: - logger.warning( - f"Style {style} not found. Defaulting to Makoto Shinkai" - ) - return shinkai_model - - -def adjust_image_for_model(img): - logger.info(f"Image Height: {img.height}, Image Width: {img.width}") - if img.height > MAX_DIMENSION or img.width > MAX_DIMENSION: - logger.info(f"Dimensions too large. 
Resizing to {MAX_DIMENSION}px.") - img.thumbnail((MAX_DIMENSION, MAX_DIMENSION), Image.ANTIALIAS) - - return img - - -def inference(img, style): - img = adjust_image_for_model(img) - - # load image - input_image = img.convert(COLOUR_MODEL) - input_image = np.asarray(input_image) - # RGB -> BGR - input_image = input_image[:, :, [2, 1, 0]] - input_image = transforms.ToTensor()(input_image).unsqueeze(0) - # preprocess, (-1, 1) - input_image = -1 + 2 * input_image - - if enable_gpu: - logger.info(f"CUDA found. Using GPU.") - # Allows to specify a card for calculation - input_image = Variable(input_image).to(device) - else: - logger.info(f"CUDA not found. Using CPU.") - input_image = Variable(input_image).float() - - # forward - model = get_model(style) - output_image = model(input_image) - output_image = output_image[0] - # BGR -> RGB - output_image = output_image[[2, 1, 0], :, :] - output_image = output_image.data.cpu().float() * 0.5 + 0.5 - - return transforms.ToPILImage()(output_image) - - -# Gradio setup - -title = "Anime Background GAN" -description = "Gradio Demo for CartoonGAN by Chen Et. Al. Models are Shinkai Makoto, Hosoda Mamoru, Kon Satoshi, and Miyazaki Hayao." -article = "

    CartoonGAN Whitepaper from Chen et.al

    Github Repo

    Original Implementation from Yijunmaverick

    visitor badge

    " - -examples = [ - ["examples/garden_in.jpg", STYLE_SHINKAI], - ["examples/library_in.jpg", STYLE_KON], -] - - -gr.Interface( - fn=inference, - inputs=[ - gr.inputs.Image( - type="pil", - label="Input Photo (less than 1280px on both width and height)", - ), - gr.inputs.Dropdown( - STYLE_CHOICE_LIST, - type="value", - default=DEFAULT_STYLE, - label="Style", - ), - ], - outputs=gr.outputs.Image( - type="pil", - label="Output Image", - ), - title=title, - description=description, - article=article, - examples=examples, - allow_flagging="never", - allow_screenshot=False, -).launch(enable_queue=True) diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/util_flow.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/util_flow.py deleted file mode 100644 index 13c683370f8f2b4b6ac6b077d05b0964753821bb..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/utils/util_flow.py +++ /dev/null @@ -1,272 +0,0 @@ -import math -import png -import struct -import array -import numpy as np -import cv2 -import pdb - -from io import * - -UNKNOWN_FLOW_THRESH = 1e9; -UNKNOWN_FLOW = 1e10; - -# Middlebury checks -TAG_STRING = 'PIEH' # use this when WRITING the file -TAG_FLOAT = 202021.25 # check for this when READING the file - -def readPFM(file): - import re - file = open(file, 'rb') - - color = None - width = None - height = None - scale = None - endian = None - - header = file.readline().rstrip() - if header == b'PF': - color = True - elif header == b'Pf': - color = False - else: - raise Exception('Not a PFM file.') - - dim_match = re.match(b'^(\d+)\s(\d+)\s$', file.readline()) - if dim_match: - width, height = map(int, dim_match.groups()) - else: - raise Exception('Malformed PFM header.') - - scale = float(file.readline().rstrip()) - if scale < 0: # little-endian - endian = '<' - scale = -scale - else: - endian = '>' # big-endian - - data = np.fromfile(file, endian + 'f') - shape = (height, width, 3) if color else (height, width) - - data = np.reshape(data, shape) - data = np.flipud(data) - return data, scale - - -def save_pfm(file, image, scale = 1): - import sys - color = None - - if image.dtype.name != 'float32': - raise Exception('Image dtype must be float32.') - - if len(image.shape) == 3 and image.shape[2] == 3: # color image - color = True - elif len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1: # greyscale - color = False - else: - raise Exception('Image must have H x W x 3, H x W x 1 or H x W dimensions.') - - file.write('PF\n' if color else 'Pf\n') - file.write('%d %d\n' % (image.shape[1], image.shape[0])) - - endian = image.dtype.byteorder - - if endian == '<' or endian == '=' and sys.byteorder == 'little': - scale = -scale - - file.write('%f\n' % scale) - - image.tofile(file) - - -def ReadMiddleburyFloFile(path): - """ Read .FLO file as specified by Middlebury. - - Returns tuple (width, height, u, v, mask), where u, v, mask are flat - arrays of values. 
- """ - - with open(path, 'rb') as fil: - tag = struct.unpack('f', fil.read(4))[0] - width = struct.unpack('i', fil.read(4))[0] - height = struct.unpack('i', fil.read(4))[0] - - assert tag == TAG_FLOAT - - #data = np.fromfile(path, dtype=np.float, count=-1) - #data = data[3:] - - fmt = 'f' * width*height*2 - data = struct.unpack(fmt, fil.read(4*width*height*2)) - - u = data[::2] - v = data[1::2] - - mask = map(lambda x,y: abs(x) 0: - # print(u[ind], v[ind], mask[ind], row[3*x], row[3*x+1], row[3*x+2]) - - #png_reader.close() - - return (width, height, u, v, mask) - - -def WriteMiddleburyFloFile(path, width, height, u, v, mask=None): - """ Write .FLO file as specified by Middlebury. - """ - - if mask is not None: - u_masked = map(lambda x,y: x if y else UNKNOWN_FLOW, u, mask) - v_masked = map(lambda x,y: x if y else UNKNOWN_FLOW, v, mask) - else: - u_masked = u - v_masked = v - - fmt = 'f' * width*height*2 - # Interleave lists - data = [x for t in zip(u_masked,v_masked) for x in t] - - with open(path, 'wb') as fil: - fil.write(str.encode(TAG_STRING)) - fil.write(struct.pack('i', width)) - fil.write(struct.pack('i', height)) - fil.write(struct.pack(fmt, *data)) - - -def write_flow(path,flow): - - invalid_idx = (flow[:, :, 2] == 0) - flow[:, :, 0:2] = flow[:, :, 0:2]*64.+ 2 ** 15 - flow[invalid_idx, 0] = 0 - flow[invalid_idx, 1] = 0 - - flow = flow.astype(np.uint16) - flow = cv2.imwrite(path, flow[:,:,::-1]) - - #WriteKittiPngFile(path, - # flow.shape[1], flow.shape[0], flow[:,:,0].flatten(), - # flow[:,:,1].flatten(), flow[:,:,2].flatten()) - - - -def WriteKittiPngFile(path, width, height, u, v, mask=None): - """ Write 16-bit .PNG file as specified by KITTI-2015 (flow). - - u, v are lists of float values - mask is a list of floats, denoting the *valid* pixels. - """ - - data = array.array('H',[0])*width*height*3 - - for i,(u_,v_,mask_) in enumerate(zip(u,v,mask)): - data[3*i] = int(u_*64.0+2**15) - data[3*i+1] = int(v_*64.0+2**15) - data[3*i+2] = int(mask_) - - # if mask_ > 0: - # print(data[3*i], data[3*i+1],data[3*i+2]) - - with open(path, 'wb') as png_file: - png_writer = png.Writer(width=width, height=height, bitdepth=16, compression=3, greyscale=False) - png_writer.write_array(png_file, data) - - -def ConvertMiddleburyFloToKittiPng(src_path, dest_path): - width, height, u, v, mask = ReadMiddleburyFloFile(src_path) - WriteKittiPngFile(dest_path, width, height, u, v, mask=mask) - -def ConvertKittiPngToMiddleburyFlo(src_path, dest_path): - width, height, u, v, mask = ReadKittiPngFile(src_path) - WriteMiddleburyFloFile(dest_path, width, height, u, v, mask=mask) - - -def ParseFilenameKitti(filename): - # Parse kitti filename (seq_frameno.xx), - # return seq, frameno, ext. 
- # Be aware that seq might contain the dataset name (if contained as prefix) - ext = filename[filename.rfind('.'):] - frameno = filename[filename.rfind('_')+1:filename.rfind('.')] - frameno = int(frameno) - seq = filename[:filename.rfind('_')] - return seq, frameno, ext - - -def read_calib_file(filepath): - """Read in a calibration file and parse into a dictionary.""" - data = {} - - with open(filepath, 'r') as f: - for line in f.readlines(): - key, value = line.split(':', 1) - # The only non-float values in these files are dates, which - # we don't care about anyway - try: - data[key] = np.array([float(x) for x in value.split()]) - except ValueError: - pass - - return data - -def load_calib_cam_to_cam(cam_to_cam_file): - # We'll return the camera calibration as a dictionary - data = {} - - # Load and parse the cam-to-cam calibration data - filedata = read_calib_file(cam_to_cam_file) - - # Create 3x4 projection matrices - P_rect_00 = np.reshape(filedata['P_rect_00'], (3, 4)) - P_rect_10 = np.reshape(filedata['P_rect_01'], (3, 4)) - P_rect_20 = np.reshape(filedata['P_rect_02'], (3, 4)) - P_rect_30 = np.reshape(filedata['P_rect_03'], (3, 4)) - - # Compute the camera intrinsics - data['K_cam0'] = P_rect_00[0:3, 0:3] - data['K_cam1'] = P_rect_10[0:3, 0:3] - data['K_cam2'] = P_rect_20[0:3, 0:3] - data['K_cam3'] = P_rect_30[0:3, 0:3] - - data['b00'] = P_rect_00[0, 3] / P_rect_00[0, 0] - data['b10'] = P_rect_10[0, 3] / P_rect_10[0, 0] - data['b20'] = P_rect_20[0, 3] / P_rect_20[0, 0] - data['b30'] = P_rect_30[0, 3] / P_rect_30[0, 0] - - return data - diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/__init__.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dmiedit Aptio V2 11 Zip A Guide to Updating BIOS and NVRAM Variables with iFlashV and iSetupCfg.md b/spaces/raedeXanto/academic-chatgpt-beta/Dmiedit Aptio V2 11 Zip A Guide to Updating BIOS and NVRAM Variables with iFlashV and iSetupCfg.md deleted file mode 100644 index e198added362618069d3074ab5b9bdadec4516db..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dmiedit Aptio V2 11 Zip A Guide to Updating BIOS and NVRAM Variables with iFlashV and iSetupCfg.md +++ /dev/null @@ -1,151 +0,0 @@ -
    -

    PHA Pro Hazop Software Crack: What You Need to Know

    -

    If you are looking for a way to crack PHA Pro Hazop software, you might be tempted by some websites that offer free downloads or torrents of this popular tool. However, before you click on that link, you should be aware of the potential dangers and drawbacks of cracking software. In this article, we will explain what PHA Pro Hazop software is, why people want to crack it, what the risks and consequences of cracking it are, how to crack it safely and legally, and what alternatives to cracking it exist.

    -

    pha pro hazop software crack


    DOWNLOAD »»» https://tinourl.com/2uL0qC



    -

    Introduction

    -

    What is PHA Pro Hazop Software?

    -

    PHA Pro Hazop software is a product of Sphera, a leading provider of integrated risk management solutions. It is a software tool that helps organizations conduct process hazard analysis (PHA) and hazard and operability (HAZOP) studies easily and thoroughly. PHA and HAZOP are methods of identifying, assessing and controlling the impact of process-related risks on safety, health, environment and business performance.
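    To make the idea of a PHA/HAZOP study more concrete, here is a small illustrative sketch of the kind of record a single HAZOP worksheet row captures. The field names and the simple severity-times-likelihood ranking are assumptions made for this example only; they are not Sphera's actual data model or PHA Pro's schema.

```python
# Illustrative sketch only -- not PHA Pro's real schema or API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class HazopEntry:
    node: str                      # part of the process under study, e.g. a pump or pipeline
    deviation: str                 # guideword + parameter, e.g. "No Flow", "High Pressure"
    causes: List[str]              # what could lead to the deviation
    consequences: List[str]        # what happens if it occurs
    safeguards: List[str] = field(default_factory=list)
    recommendations: List[str] = field(default_factory=list)
    severity: int = 1              # placeholder 1-5 ranking for a criticality matrix
    likelihood: int = 1

    def risk_rank(self) -> int:
        """Toy severity x likelihood score, standing in for a criticality matrix."""
        return self.severity * self.likelihood


# One row of the kind a HAZOP team might record during a study
entry = HazopEntry(
    node="Feed pump P-101",
    deviation="No Flow",
    causes=["Pump trips", "Suction valve left closed"],
    consequences=["Loss of cooling to the reactor"],
    safeguards=["Low-flow alarm FAL-101"],
    recommendations=["Add auto-start of the spare pump"],
    severity=4,
    likelihood=2,
)
print(entry.risk_rank())  # 8
```

    A real study would collect many such rows per node and roll them up into the reports and criticality matrix mentioned below.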

    -

    PHA Pro Hazop software offers a user-friendly, flexible and knowledge-based solution that has evolved over the past decade through extensive commercial use to meet the requirements of many of the world's largest companies. It has features such as:

    -
      -
    • Dynamic linking of diagrams with worksheets
    • Professional reports exportable in HTML, MS Word, MS Excel
    • International support such as multi-language and right to left data entry
    • Enhanced AutoType and Copy features
    • Enhanced Release Management
    • Criticality Matrix
    • Linked HAZOP and LOPA templates
    • Recommendations Manager
    • Comprehensive knowledge libraries to shorten study time and leverage best practices
    -

    Why do people want to crack it?

    -

    PHA Pro Hazop software is not cheap. According to its official website, it costs $4,995 for a single user license, $9,995 for a five user license, and $19,995 for an unlimited user license. These prices do not include maintenance fees or training costs. For many individuals or small businesses who need to conduct PHA or HAZOP studies, these prices may be too high or unaffordable.

    -

    Therefore, some people may look for ways to crack PHA Pro Hazop software, which means to bypass its security features and use it without paying for it. Cracking software is usually done by downloading a cracked version from a website or a torrent site, or by using a keygen or a patch program that generates a fake serial number or modifies the original files.

    -

    What are the risks and consequences of cracking it?

    -

    Cracking software may seem like an easy and convenient way to save money, but it comes with many risks and consequences that may outweigh the benefits. Some of these are:

    -
      -
    • Legal risks: Cracking software is illegal in most countries. It violates the intellectual property rights of the software developers and distributors. It can result in civil lawsuits or criminal charges that may lead to fines or imprisonment.
    • Security risks: Cracking software may expose your computer or network to viruses, malware, spyware or ransomware that can damage your data or system, steal your personal information or credentials, or lock your files until you pay a ransom.
    • Quality risks: Cracking software may compromise the functionality or performance of the software. It may cause errors, bugs, crashes or compatibility issues that can affect your work or results. It may also prevent you from receiving updates, patches or technical support from the official source.
    • Ethical risks: Cracking software may harm the reputation or credibility of yourself or your organization. It may show a lack of respect or integrity towards the software developers and distributors who invested time, money and effort into creating and maintaining the software. It may also discourage them from developing more or better software in the future.
    -

    How to Crack PHA Pro Hazop Software Safely and Legally

    -

    If you still want to crack PHA Pro Hazop software despite knowing the risks and consequences, you should at least try to do it safely and legally. Here are some tips on how to do that:

    -


    -

    Use a reliable source

    -

    Not all websites or torrent sites that offer cracked software are trustworthy. Some of them may contain malicious links or files that can infect your computer or network. Therefore, you should do some research before downloading anything from them. You should check their ratings, reviews, comments or feedback from other users. You should also use antivirus software or online scanners to scan the downloaded files before opening them.
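    As a practical complement to the advice above (and one the article itself does not spell out), some download pages publish a SHA-256 checksum for their files; comparing the downloaded file's hash against that published value is a quick way to spot a tampered or corrupted download. A minimal Python sketch, in which the file name and expected hash are placeholders only:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values for illustration only.
downloaded_file = "pha_pro_setup_download.zip"
published_hash = "replace-with-the-checksum-published-by-the-source"

if sha256_of(downloaded_file) == published_hash:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not open this file.")
```

    Note that a matching checksum only tells you the file is what the source published; it says nothing about whether the source itself is trustworthy, so it does not replace scanning.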

    -

    Scan for viruses and malware

    -

    Even if you use a reliable source, you should still scan for viruses and malware regularly. Some cracked software contains hidden code or programs that activate later or evade detection by antivirus software. Therefore, you should use reputable antivirus software or online scanners to scan your computer or network periodically, and keep your antivirus software updated to keep up with new threats.

    -

    Backup your data and system

    -

    In case something goes wrong with your cracked software or your computer or network gets infected by viruses or malware, you should have a backup plan. You should backup your data and system regularly to an external device or cloud service. This way, you can restore your data and system if they get corrupted or lost.
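    To make the backup advice concrete, here is a minimal Python sketch that copies a working folder to a timestamped directory on another drive. The paths are placeholders, and in practice a dedicated backup tool or cloud service is usually the better choice; this only illustrates the idea:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_folder(source: str, backup_root: str) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / f"{Path(source).name}-{stamp}"
    shutil.copytree(source, destination)  # fails if the destination already exists
    return destination

# Placeholder paths for illustration only.
created = backup_folder("C:/Projects/hazop-studies", "E:/backups")
print(f"Backup written to {created}")
```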

    -

    Follow the instructions carefully

    -

    If you download a cracked version of PHA Pro Hazop software from a website or a torrent site, it may come with instructions on how to install and use it. You should follow these instructions carefully to avoid any errors or problems. You should also read any readme files or notes that come with the cracked software. They may contain important information or warnings that you need to know.

    -

    Alternatives to Cracking PHA Pro Hazop Software

    -

    If you want to avoid the risks and consequences of cracking PHA Pro Hazop software altogether, you should consider some alternatives that are safer and legal. Some of these are:

    -

    Buy a licensed version

    -

    The best way to use PHA Pro Hazop software is to buy a licensed version from its official website or an authorized reseller. This way, you can enjoy all its features and benefits without any worries. You can also receive updates, patches and technical support from the official source. You can also choose a license option that suits your budget and needs.

    -

    Use a free or open-source software

    -

    If you cannot afford to buy a licensed version of PHA Pro Hazop software, you can look for free or open-source alternatives that offer similar functions or features. Free software means that you can use it without paying anything for it. Open-source software means that you can access its source code and modify it according to your preferences. Some examples of free or open-source PHA or HAZOP software are:

    -
      -
    • OpenLCA: A life cycle assessment (LCA) tool that can also perform environmental risk assessment (ERA).
    • -
    • OpenRisk Manual: A wiki-based platform that provides guidance on risk management methodologies.
    • -
    • R Project: A programming language and environment that can perform statistical analysis and graphical visualization.
    • -
    • Python: A programming language that can perform various tasks such as data analysis, web development and machine learning (see the short sketch after this list).
    • -
    -
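    None of these tools is a drop-in replacement for PHA Pro, but to make the Python option above concrete, here is a minimal sketch of the risk-ranking arithmetic that sits behind many PHA worksheets: each identified scenario is scored for severity and likelihood, and the product is used to rank the scenarios. The scenario data is invented purely for illustration:

```python
# Minimal risk-ranking sketch: risk = severity x likelihood,
# the simple arithmetic behind many PHA/HAZOP worksheets.
scenarios = [
    # (description, severity 1-5, likelihood 1-5) - invented example data
    ("Loss of cooling water to reactor", 5, 2),
    ("Overfilling of storage tank", 3, 3),
    ("Blocked relief valve outlet", 4, 1),
]

ranked = sorted(
    ((desc, sev, lik, sev * lik) for desc, sev, lik in scenarios),
    key=lambda row: row[3],
    reverse=True,
)

print(f"{'Scenario':35} {'Sev':>3} {'Lik':>3} {'Risk':>4}")
for desc, sev, lik, risk in ranked:
    print(f"{desc:35} {sev:>3} {lik:>3} {risk:>4}")
```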

    Hire a professional service

    -

    If you do not have the time or skills to conduct PHA or HAZOP studies yourself, you can hire a professional service that can do it for you. A professional service can provide you with expert knowledge, experience and resources that can help you conduct PHA or HAZOP studies efficiently and effectively. A professional service can also provide you with a report that meets the standards and regulations of your industry or sector. Some examples of professional services that offer PHA or HAZOP studies are:

    -
      -
    • ABS Group: A global provider of risk and reliability solutions for various industries.
    • -
    • DEKRA: A global leader in safety testing, inspection and certification.
    • -
    • Intertek: A global provider of quality assurance and risk management services.
    • -
    • SGS: A global provider of inspection, verification, testing and certification services.
    • -
    -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, PHA Pro Hazop software is a powerful tool that can help you conduct process hazard analysis and hazard and operability studies easily and thoroughly. However, cracking it is not a good idea, as it can expose you to legal, security, quality and ethical risks and consequences. Instead, you should either buy a licensed version, use a free or open-source alternative, or hire a professional service.

    -

    Call to action

    -

    If you want to learn more about PHA Pro Hazop software or how to conduct PHA or HAZOP studies, you can visit the official website of Sphera at https://sphera.com/. You can also contact them for a free demo or a quote. Alternatively, you can check out some of the free or open-source software or professional services that we mentioned in this article.

    -

    FAQs

    -
      -
    1. What is the difference between PHA and HAZOP?
    -

      PHA stands for process hazard analysis, which is a general term for any method of identifying, assessing and controlling the impact of process-related risks on safety, health, environment and business performance. HAZOP stands for hazard and operability, which is a specific type of PHA that uses a systematic and structured approach to examine the design and operation of a process for potential deviations from normal conditions that may cause hazards or operability problems.
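      To make the HAZOP half of that answer concrete: the "systematic and structured approach" usually works by pairing standard guidewords (NO, MORE, LESS, REVERSE, and so on) with process parameters (flow, pressure, temperature, level) to generate candidate deviations, which the study team then examines for causes, consequences and safeguards. A tiny illustrative sketch in Python, where the parameter list is just an example for one node:

```python
# Generate candidate HAZOP deviations from guidewords and process parameters.
guidewords = ["NO", "MORE", "LESS", "REVERSE", "AS WELL AS", "OTHER THAN"]
parameters = ["flow", "pressure", "temperature", "level"]  # example node parameters

deviations = [f"{guideword} {parameter}" for parameter in parameters for guideword in guidewords]

for deviation in deviations[:6]:
    # In a real study each deviation is reviewed for causes,
    # consequences, existing safeguards and recommendations.
    print(deviation)
```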

      -
    2. What are the benefits of using PHA Pro Hazop software?
    -

      PHA Pro Hazop software can help you conduct PHA or HAZOP studies more efficiently and effectively. It can help you:

      -
        -
      • Build on previous assessments to avoid wasting time and resources
      • -
      • Retain valuable corporate knowledge and intellectual property
      • -
      • Implement risk studies easily and thoroughly
      • -
      • Customize preformatted standard PHA templates
      • -
      • Leverage a myriad of features specific to processes
      • -
      • Increase consistency across assessments
      • -
      • Generate professional reports in various formats
      • -
      • Use comprehensive knowledge libraries to shorten study time and leverage best practices
      • -
      -
    3. How much does PHA Pro Hazop software cost?
    -

      According to its official website, PHA Pro Hazop software costs $4,995 for a single user license, $9,995 for a five user license, and $19,995 for an unlimited user license. These prices do not include maintenance fees or training costs.

      -
    4. How can I crack PHA Pro Hazop software safely and legally?
    -

      If you still want to crack PHA Pro Hazop software despite knowing the risks and consequences, be aware that you cannot make it legal; the most you can do is reduce some of the practical risks. You should:

      -
        -
      • Use a reliable source
      • -
      • Scan for viruses and malware
      • -
      • Backup your data and system
      • -
      • Follow the instructions carefully
      • -
      -
    5. What are some alternatives to cracking PHA Pro Hazop software?
    -

      If you want to avoid the risks and consequences of cracking PHA Pro Hazop software altogether, you should consider some alternatives that are safer and legal. You should either:

      -
        -
      • Buy a licensed version
      • -
      • Use a free or open-source software
      • -
      • Hire a professional service
      • -
      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/advanced_tts.py b/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/advanced_tts.py deleted file mode 100644 index ccf42704b83aee57487359f447a0966c05de704e..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/utils/inference/advanced_tts.py +++ /dev/null @@ -1,135 +0,0 @@ - -from tts import TextToMel, MelToWav -from transliterate import XlitEngine -from num_to_word_on_sent import normalize_nums - -import re -import numpy as np -from scipy.io.wavfile import write - -from mosestokenizer import * -from indicnlp.tokenize import sentence_tokenize -import argparse - -_INDIC = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"] -_PURAM_VIRAM_LANGUAGES = ["hi", "or", "bn", "as"] -_TRANSLITERATION_NOT_AVAILABLE_IN = ["en","or"] -#_NUM2WORDS_NOT_AVAILABLE_IN = [] - -def normalize_text(text, lang): - if lang in _PURAM_VIRAM_LANGUAGES: - text = text.replace('|', '।') - text = text.replace('.', '।') - return text - -def split_sentences(paragraph, language): - if language == "en": - with MosesSentenceSplitter(language) as splitter: - return splitter([paragraph]) - elif language in _INDIC: - return sentence_tokenize.sentence_split(paragraph, lang=language) - - - -def load_models(acoustic, vocoder, device): - text_to_mel = TextToMel(glow_model_dir=acoustic, device=device) - mel_to_wav = MelToWav(hifi_model_dir=vocoder, device=device) - return text_to_mel, mel_to_wav - - -def translit(text, lang): - reg = re.compile(r'[a-zA-Z]') - words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()] - updated_sent = ' '.join(words) - return updated_sent - - - -def run_tts(text, lang, args): - if lang == 'hi': - text = text.replace('।', '.') # only for hindi models - - if lang == 'en' and text[-1] != '.': - text = text + '. 
' - - if args.number_conversion == 1 and lang!='en': - print("Doing number conversion") - text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang - else: - text_num_to_word = text - - - if args.transliteration == 1 and lang not in _TRANSLITERATION_NOT_AVAILABLE_IN: - print("Doing transliteration") - text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang - else: - text_num_to_word_and_transliterated = text_num_to_word - - final_text = ' ' + text_num_to_word_and_transliterated - - mel = text_to_mel.generate_mel(final_text, args.noise_scale, args.length_scale) - audio, sr = mel_to_wav.generate_wav(mel) - return sr, audio - -def run_tts_paragraph(args): - audio_list = [] - if args.split_sentences == 1: - text = normalize_text(args.text, args.lang) - split_sentences_list = split_sentences(text, args.lang) - - for sent in split_sentences_list: - sr, audio = run_tts(sent, args.lang, args) - audio_list.append(audio) - - concatenated_audio = np.concatenate([i for i in audio_list]) - if args.wav: - write(filename=args.wav, rate=sr, data=concatenated_audio) - return (sr, concatenated_audio) - else: - sr, audio = run_tts(args.text, args.lang, args) - if args.wav: - write(filename=args.wav, rate=sr, data=audio) - return (sr, audio) - - -def load_all_models(args): - global engine - if args.lang not in _TRANSLITERATION_NOT_AVAILABLE_IN: - engine = XlitEngine(args.lang) # loading translit model globally - - global text_to_mel - global mel_to_wav - - text_to_mel, mel_to_wav = load_models(args.acoustic, args.vocoder, args.device) - - try: - args.noise_scale = float(args.noise_scale) - args.length_scale = float(args.length_scale) - except: - pass - - print(args) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("-a", "--acoustic", required=True, type=str) - parser.add_argument("-v", "--vocoder", required=True, type=str) - parser.add_argument("-d", "--device", type=str, default="cpu") - parser.add_argument("-t", "--text", type=str, required=True) - parser.add_argument("-w", "--wav", type=str, required=True) - parser.add_argument("-n", "--noise-scale", default='0.667', type=str ) - parser.add_argument("-l", "--length-scale", default='1.0', type=str) - - parser.add_argument("-T", "--transliteration", default=1, type=int) - parser.add_argument("-N", "--number-conversion", default=1, type=int) - parser.add_argument("-S", "--split-sentences", default=1, type=int) - parser.add_argument("-L", "--lang", type=str, required=True) - - args = parser.parse_args() - - load_all_models(args) - run_tts_paragraph(args) - - diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/fs/promises.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/fs/promises.d.ts deleted file mode 100644 index aca2fd51b27b708a592bed55b2c6f61c9564f15b..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/fs/promises.d.ts +++ /dev/null @@ -1,1138 +0,0 @@ -/** - * The `fs/promises` API provides asynchronous file system methods that return - * promises. - * - * The promise APIs use the underlying Node.js threadpool to perform file - * system operations off the event loop thread. These operations are not - * synchronized or threadsafe. Care must be taken when performing multiple - * concurrent modifications on the same file or data corruption may occur. 
- * @since v10.0.0 - */ -declare module 'fs/promises' { - import { Abortable } from 'node:events'; - import { Stream } from 'node:stream'; - import { ReadableStream } from 'node:stream/web'; - import { - BigIntStats, - BufferEncodingOption, - constants as fsConstants, - CopyOptions, - Dir, - Dirent, - MakeDirectoryOptions, - Mode, - ObjectEncodingOptions, - OpenDirOptions, - OpenMode, - PathLike, - ReadStream, - ReadVResult, - RmDirOptions, - RmOptions, - StatOptions, - Stats, - TimeLike, - WatchEventType, - WatchOptions, - WriteStream, - WriteVResult, - } from 'node:fs'; - import { Interface as ReadlineInterface } from 'node:readline'; - - interface FileChangeInfo { - eventType: WatchEventType; - filename: T; - } - interface FlagAndOpenMode { - mode?: Mode | undefined; - flag?: OpenMode | undefined; - } - interface FileReadResult { - bytesRead: number; - buffer: T; - } - interface FileReadOptions { - /** - * @default `Buffer.alloc(0xffff)` - */ - buffer?: T; - /** - * @default 0 - */ - offset?: number | null; - /** - * @default `buffer.byteLength` - */ - length?: number | null; - position?: number | null; - } - interface CreateReadStreamOptions { - encoding?: BufferEncoding | null | undefined; - autoClose?: boolean | undefined; - emitClose?: boolean | undefined; - start?: number | undefined; - end?: number | undefined; - highWaterMark?: number | undefined; - } - interface CreateWriteStreamOptions { - encoding?: BufferEncoding | null | undefined; - autoClose?: boolean | undefined; - emitClose?: boolean | undefined; - start?: number | undefined; - } - // TODO: Add `EventEmitter` close - interface FileHandle { - /** - * The numeric file descriptor managed by the {FileHandle} object. - * @since v10.0.0 - */ - readonly fd: number; - /** - * Alias of `filehandle.writeFile()`. - * - * When operating on file handles, the mode cannot be changed from what it was set - * to with `fsPromises.open()`. Therefore, this is equivalent to `filehandle.writeFile()`. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - appendFile(data: string | Uint8Array, options?: (ObjectEncodingOptions & FlagAndOpenMode) | BufferEncoding | null): Promise; - /** - * Changes the ownership of the file. A wrapper for [`chown(2)`](http://man7.org/linux/man-pages/man2/chown.2.html). - * @since v10.0.0 - * @param uid The file's new owner's user id. - * @param gid The file's new group's group id. - * @return Fulfills with `undefined` upon success. - */ - chown(uid: number, gid: number): Promise; - /** - * Modifies the permissions on the file. See [`chmod(2)`](http://man7.org/linux/man-pages/man2/chmod.2.html). - * @since v10.0.0 - * @param mode the file mode bit mask. - * @return Fulfills with `undefined` upon success. - */ - chmod(mode: Mode): Promise; - /** - * Unlike the 16 kb default `highWaterMark` for a `stream.Readable`, the stream - * returned by this method has a default `highWaterMark` of 64 kb. - * - * `options` can include `start` and `end` values to read a range of bytes from - * the file instead of the entire file. Both `start` and `end` are inclusive and - * start counting at 0, allowed values are in the - * \[0, [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER)\] range. If `start` is - * omitted or `undefined`, `filehandle.createReadStream()` reads sequentially from - * the current file position. The `encoding` can be any one of those accepted by `Buffer`. 
- * - * If the `FileHandle` points to a character device that only supports blocking - * reads (such as keyboard or sound card), read operations do not finish until data - * is available. This can prevent the process from exiting and the stream from - * closing naturally. - * - * By default, the stream will emit a `'close'` event after it has been - * destroyed. Set the `emitClose` option to `false` to change this behavior. - * - * ```js - * import { open } from 'fs/promises'; - * - * const fd = await open('/dev/input/event0'); - * // Create a stream from some character device. - * const stream = fd.createReadStream(); - * setTimeout(() => { - * stream.close(); // This may not close the stream. - * // Artificially marking end-of-stream, as if the underlying resource had - * // indicated end-of-file by itself, allows the stream to close. - * // This does not cancel pending read operations, and if there is such an - * // operation, the process may still not be able to exit successfully - * // until it finishes. - * stream.push(null); - * stream.read(0); - * }, 100); - * ``` - * - * If `autoClose` is false, then the file descriptor won't be closed, even if - * there's an error. It is the application's responsibility to close it and make - * sure there's no file descriptor leak. If `autoClose` is set to true (default - * behavior), on `'error'` or `'end'` the file descriptor will be closed - * automatically. - * - * An example to read the last 10 bytes of a file which is 100 bytes long: - * - * ```js - * import { open } from 'fs/promises'; - * - * const fd = await open('sample.txt'); - * fd.createReadStream({ start: 90, end: 99 }); - * ``` - * @since v16.11.0 - */ - createReadStream(options?: CreateReadStreamOptions): ReadStream; - /** - * `options` may also include a `start` option to allow writing data at some - * position past the beginning of the file, allowed values are in the - * \[0, [`Number.MAX_SAFE_INTEGER`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/MAX_SAFE_INTEGER)\] range. Modifying a file rather than - * replacing it may require the `flags` `open` option to be set to `r+` rather than - * the default `r`. The `encoding` can be any one of those accepted by `Buffer`. - * - * If `autoClose` is set to true (default behavior) on `'error'` or `'finish'`the file descriptor will be closed automatically. If `autoClose` is false, - * then the file descriptor won't be closed, even if there's an error. - * It is the application's responsibility to close it and make sure there's no - * file descriptor leak. - * - * By default, the stream will emit a `'close'` event after it has been - * destroyed. Set the `emitClose` option to `false` to change this behavior. - * @since v16.11.0 - */ - createWriteStream(options?: CreateWriteStreamOptions): WriteStream; - /** - * Forces all currently queued I/O operations associated with the file to the - * operating system's synchronized I/O completion state. Refer to the POSIX [`fdatasync(2)`](http://man7.org/linux/man-pages/man2/fdatasync.2.html) documentation for details. - * - * Unlike `filehandle.sync` this method does not flush modified metadata. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - datasync(): Promise; - /** - * Request that all data for the open file descriptor is flushed to the storage - * device. The specific implementation is operating system and device specific. 
- * Refer to the POSIX [`fsync(2)`](http://man7.org/linux/man-pages/man2/fsync.2.html) documentation for more detail. - * @since v10.0.0 - * @return Fufills with `undefined` upon success. - */ - sync(): Promise; - /** - * Reads data from the file and stores that in the given buffer. - * - * If the file is not modified concurrently, the end-of-file is reached when the - * number of bytes read is zero. - * @since v10.0.0 - * @param buffer A buffer that will be filled with the file data read. - * @param offset The location in the buffer at which to start filling. - * @param length The number of bytes to read. - * @param position The location where to begin reading data from the file. If `null`, data will be read from the current file position, and the position will be updated. If `position` is an - * integer, the current file position will remain unchanged. - * @return Fulfills upon success with an object with two properties: - */ - read(buffer: T, offset?: number | null, length?: number | null, position?: number | null): Promise>; - read(options?: FileReadOptions): Promise>; - /** - * Returns a `ReadableStream` that may be used to read the files data. - * - * An error will be thrown if this method is called more than once or is called after the `FileHandle` is closed - * or closing. - * - * ```js - * import { open } from 'node:fs/promises'; - * - * const file = await open('./some/file/to/read'); - * - * for await (const chunk of file.readableWebStream()) - * console.log(chunk); - * - * await file.close(); - * ``` - * - * While the `ReadableStream` will read the file to completion, it will not close the `FileHandle` automatically. User code must still call the `fileHandle.close()` method. - * - * @since v17.0.0 - * @experimental - */ - readableWebStream(): ReadableStream; - /** - * Asynchronously reads the entire contents of a file. - * - * If `options` is a string, then it specifies the `encoding`. - * - * The `FileHandle` has to support reading. - * - * If one or more `filehandle.read()` calls are made on a file handle and then a`filehandle.readFile()` call is made, the data will be read from the current - * position till the end of the file. It doesn't always read from the beginning - * of the file. - * @since v10.0.0 - * @return Fulfills upon a successful read with the contents of the file. If no encoding is specified (using `options.encoding`), the data is returned as a {Buffer} object. Otherwise, the - * data will be a string. - */ - readFile( - options?: { - encoding?: null | undefined; - flag?: OpenMode | undefined; - } | null - ): Promise; - /** - * Asynchronously reads the entire contents of a file. The underlying file will _not_ be closed automatically. - * The `FileHandle` must have been opened for reading. - * @param options An object that may contain an optional flag. - * If a flag is not provided, it defaults to `'r'`. - */ - readFile( - options: - | { - encoding: BufferEncoding; - flag?: OpenMode | undefined; - } - | BufferEncoding - ): Promise; - /** - * Asynchronously reads the entire contents of a file. The underlying file will _not_ be closed automatically. - * The `FileHandle` must have been opened for reading. - * @param options An object that may contain an optional flag. - * If a flag is not provided, it defaults to `'r'`. - */ - readFile( - options?: - | (ObjectEncodingOptions & { - flag?: OpenMode | undefined; - }) - | BufferEncoding - | null - ): Promise; - /** - * Convenience method to create a `readline` interface and stream over the file. 
For example: - * - * ```js - * import { open } from 'node:fs/promises'; - * - * const file = await open('./some/file/to/read'); - * - * for await (const line of file.readLines()) { - * console.log(line); - * } - * ``` - * - * @since v18.11.0 - * @param options See `filehandle.createReadStream()` for the options. - */ - readLines(options?: CreateReadStreamOptions): ReadlineInterface; - /** - * @since v10.0.0 - * @return Fulfills with an {fs.Stats} for the file. - */ - stat( - opts?: StatOptions & { - bigint?: false | undefined; - } - ): Promise; - stat( - opts: StatOptions & { - bigint: true; - } - ): Promise; - stat(opts?: StatOptions): Promise; - /** - * Truncates the file. - * - * If the file was larger than `len` bytes, only the first `len` bytes will be - * retained in the file. - * - * The following example retains only the first four bytes of the file: - * - * ```js - * import { open } from 'fs/promises'; - * - * let filehandle = null; - * try { - * filehandle = await open('temp.txt', 'r+'); - * await filehandle.truncate(4); - * } finally { - * await filehandle?.close(); - * } - * ``` - * - * If the file previously was shorter than `len` bytes, it is extended, and the - * extended part is filled with null bytes (`'\0'`): - * - * If `len` is negative then `0` will be used. - * @since v10.0.0 - * @param [len=0] - * @return Fulfills with `undefined` upon success. - */ - truncate(len?: number): Promise; - /** - * Change the file system timestamps of the object referenced by the `FileHandle` then resolves the promise with no arguments upon success. - * @since v10.0.0 - */ - utimes(atime: TimeLike, mtime: TimeLike): Promise; - /** - * Asynchronously writes data to a file, replacing the file if it already exists.`data` can be a string, a buffer, an - * [AsyncIterable](https://tc39.github.io/ecma262/#sec-asynciterable-interface) or - * [Iterable](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#The_iterable_protocol) object. - * The promise is resolved with no arguments upon success. - * - * If `options` is a string, then it specifies the `encoding`. - * - * The `FileHandle` has to support writing. - * - * It is unsafe to use `filehandle.writeFile()` multiple times on the same file - * without waiting for the promise to be resolved (or rejected). - * - * If one or more `filehandle.write()` calls are made on a file handle and then a`filehandle.writeFile()` call is made, the data will be written from the - * current position till the end of the file. It doesn't always write from the - * beginning of the file. - * @since v10.0.0 - */ - writeFile(data: string | Uint8Array, options?: (ObjectEncodingOptions & FlagAndOpenMode & Abortable) | BufferEncoding | null): Promise; - /** - * Write `buffer` to the file. - * - * The promise is resolved with an object containing two properties: - * - * It is unsafe to use `filehandle.write()` multiple times on the same file - * without waiting for the promise to be resolved (or rejected). For this - * scenario, use `filehandle.createWriteStream()`. - * - * On Linux, positional writes do not work when the file is opened in append mode. - * The kernel ignores the position argument and always appends the data to - * the end of the file. - * @since v10.0.0 - * @param [offset=0] The start position from within `buffer` where the data to write begins. - * @param [length=buffer.byteLength - offset] The number of bytes from `buffer` to write. 
- * @param position The offset from the beginning of the file where the data from `buffer` should be written. If `position` is not a `number`, the data will be written at the current position. - * See the POSIX pwrite(2) documentation for more detail. - */ - write( - buffer: TBuffer, - offset?: number | null, - length?: number | null, - position?: number | null - ): Promise<{ - bytesWritten: number; - buffer: TBuffer; - }>; - write( - data: string, - position?: number | null, - encoding?: BufferEncoding | null - ): Promise<{ - bytesWritten: number; - buffer: string; - }>; - /** - * Write an array of [ArrayBufferView](https://developer.mozilla.org/en-US/docs/Web/API/ArrayBufferView) s to the file. - * - * The promise is resolved with an object containing a two properties: - * - * It is unsafe to call `writev()` multiple times on the same file without waiting - * for the promise to be resolved (or rejected). - * - * On Linux, positional writes don't work when the file is opened in append mode. - * The kernel ignores the position argument and always appends the data to - * the end of the file. - * @since v12.9.0 - * @param position The offset from the beginning of the file where the data from `buffers` should be written. If `position` is not a `number`, the data will be written at the current - * position. - */ - writev(buffers: ReadonlyArray, position?: number): Promise; - /** - * Read from a file and write to an array of [ArrayBufferView](https://developer.mozilla.org/en-US/docs/Web/API/ArrayBufferView) s - * @since v13.13.0, v12.17.0 - * @param position The offset from the beginning of the file where the data should be read from. If `position` is not a `number`, the data will be read from the current position. - * @return Fulfills upon success an object containing two properties: - */ - readv(buffers: ReadonlyArray, position?: number): Promise; - /** - * Closes the file handle after waiting for any pending operation on the handle to - * complete. - * - * ```js - * import { open } from 'fs/promises'; - * - * let filehandle; - * try { - * filehandle = await open('thefile.txt', 'r'); - * } finally { - * await filehandle?.close(); - * } - * ``` - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - close(): Promise; - } - - const constants: typeof fsConstants; - - /** - * Tests a user's permissions for the file or directory specified by `path`. - * The `mode` argument is an optional integer that specifies the accessibility - * checks to be performed. `mode` should be either the value `fs.constants.F_OK`or a mask consisting of the bitwise OR of any of `fs.constants.R_OK`,`fs.constants.W_OK`, and `fs.constants.X_OK` - * (e.g.`fs.constants.W_OK | fs.constants.R_OK`). Check `File access constants` for - * possible values of `mode`. - * - * If the accessibility check is successful, the promise is resolved with no - * value. If any of the accessibility checks fail, the promise is rejected - * with an [Error](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error) object. The following example checks if the file`/etc/passwd` can be read and - * written by the current process. 
- * - * ```js - * import { access } from 'fs/promises'; - * import { constants } from 'fs'; - * - * try { - * await access('/etc/passwd', constants.R_OK | constants.W_OK); - * console.log('can access'); - * } catch { - * console.error('cannot access'); - * } - * ``` - * - * Using `fsPromises.access()` to check for the accessibility of a file before - * calling `fsPromises.open()` is not recommended. Doing so introduces a race - * condition, since other processes may change the file's state between the two - * calls. Instead, user code should open/read/write the file directly and handle - * the error raised if the file is not accessible. - * @since v10.0.0 - * @param [mode=fs.constants.F_OK] - * @return Fulfills with `undefined` upon success. - */ - function access(path: PathLike, mode?: number): Promise; - /** - * Asynchronously copies `src` to `dest`. By default, `dest` is overwritten if it - * already exists. - * - * No guarantees are made about the atomicity of the copy operation. If an - * error occurs after the destination file has been opened for writing, an attempt - * will be made to remove the destination. - * - * ```js - * import { constants } from 'fs'; - * import { copyFile } from 'fs/promises'; - * - * try { - * await copyFile('source.txt', 'destination.txt'); - * console.log('source.txt was copied to destination.txt'); - * } catch { - * console.log('The file could not be copied'); - * } - * - * // By using COPYFILE_EXCL, the operation will fail if destination.txt exists. - * try { - * await copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL); - * console.log('source.txt was copied to destination.txt'); - * } catch { - * console.log('The file could not be copied'); - * } - * ``` - * @since v10.0.0 - * @param src source filename to copy - * @param dest destination filename of the copy operation - * @param [mode=0] Optional modifiers that specify the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. - * `fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE`) - * @return Fulfills with `undefined` upon success. - */ - function copyFile(src: PathLike, dest: PathLike, mode?: number): Promise; - /** - * Opens a `FileHandle`. - * - * Refer to the POSIX [`open(2)`](http://man7.org/linux/man-pages/man2/open.2.html) documentation for more detail. - * - * Some characters (`< > : " / \ | ? *`) are reserved under Windows as documented - * by [Naming Files, Paths, and Namespaces](https://docs.microsoft.com/en-us/windows/desktop/FileIO/naming-a-file). Under NTFS, if the filename contains - * a colon, Node.js will open a file system stream, as described by [this MSDN page](https://docs.microsoft.com/en-us/windows/desktop/FileIO/using-streams). - * @since v10.0.0 - * @param [flags='r'] See `support of file system `flags``. - * @param [mode=0o666] Sets the file mode (permission and sticky bits) if the file is created. - * @return Fulfills with a {FileHandle} object. - */ - function open(path: PathLike, flags?: string | number, mode?: Mode): Promise; - /** - * Renames `oldPath` to `newPath`. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function rename(oldPath: PathLike, newPath: PathLike): Promise; - /** - * Truncates (shortens or extends the length) of the content at `path` to `len`bytes. - * @since v10.0.0 - * @param [len=0] - * @return Fulfills with `undefined` upon success. 
- */ - function truncate(path: PathLike, len?: number): Promise; - /** - * Removes the directory identified by `path`. - * - * Using `fsPromises.rmdir()` on a file (not a directory) results in the - * promise being rejected with an `ENOENT` error on Windows and an `ENOTDIR`error on POSIX. - * - * To get a behavior similar to the `rm -rf` Unix command, use `fsPromises.rm()` with options `{ recursive: true, force: true }`. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function rmdir(path: PathLike, options?: RmDirOptions): Promise; - /** - * Removes files and directories (modeled on the standard POSIX `rm` utility). - * @since v14.14.0 - * @return Fulfills with `undefined` upon success. - */ - function rm(path: PathLike, options?: RmOptions): Promise; - /** - * Asynchronously creates a directory. - * - * The optional `options` argument can be an integer specifying `mode` (permission - * and sticky bits), or an object with a `mode` property and a `recursive`property indicating whether parent directories should be created. Calling`fsPromises.mkdir()` when `path` is a directory - * that exists results in a - * rejection only when `recursive` is false. - * @since v10.0.0 - * @return Upon success, fulfills with `undefined` if `recursive` is `false`, or the first directory path created if `recursive` is `true`. - */ - function mkdir( - path: PathLike, - options: MakeDirectoryOptions & { - recursive: true; - } - ): Promise; - /** - * Asynchronous mkdir(2) - create a directory. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders - * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`. - */ - function mkdir( - path: PathLike, - options?: - | Mode - | (MakeDirectoryOptions & { - recursive?: false | undefined; - }) - | null - ): Promise; - /** - * Asynchronous mkdir(2) - create a directory. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options Either the file mode, or an object optionally specifying the file mode and whether parent folders - * should be created. If a string is passed, it is parsed as an octal integer. If not specified, defaults to `0o777`. - */ - function mkdir(path: PathLike, options?: Mode | MakeDirectoryOptions | null): Promise; - /** - * Reads the contents of a directory. - * - * The optional `options` argument can be a string specifying an encoding, or an - * object with an `encoding` property specifying the character encoding to use for - * the filenames. If the `encoding` is set to `'buffer'`, the filenames returned - * will be passed as `Buffer` objects. - * - * If `options.withFileTypes` is set to `true`, the resolved array will contain `fs.Dirent` objects. - * - * ```js - * import { readdir } from 'fs/promises'; - * - * try { - * const files = await readdir(path); - * for (const file of files) - * console.log(file); - * } catch (err) { - * console.error(err); - * } - * ``` - * @since v10.0.0 - * @return Fulfills with an array of the names of the files in the directory excluding `'.'` and `'..'`. - */ - function readdir( - path: PathLike, - options?: - | (ObjectEncodingOptions & { - withFileTypes?: false | undefined; - }) - | BufferEncoding - | null - ): Promise; - /** - * Asynchronous readdir(3) - read a directory. - * @param path A path to a file. 
If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function readdir( - path: PathLike, - options: - | { - encoding: 'buffer'; - withFileTypes?: false | undefined; - } - | 'buffer' - ): Promise; - /** - * Asynchronous readdir(3) - read a directory. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function readdir( - path: PathLike, - options?: - | (ObjectEncodingOptions & { - withFileTypes?: false | undefined; - }) - | BufferEncoding - | null - ): Promise; - /** - * Asynchronous readdir(3) - read a directory. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options If called with `withFileTypes: true` the result data will be an array of Dirent. - */ - function readdir( - path: PathLike, - options: ObjectEncodingOptions & { - withFileTypes: true; - } - ): Promise; - /** - * Reads the contents of the symbolic link referred to by `path`. See the POSIX [`readlink(2)`](http://man7.org/linux/man-pages/man2/readlink.2.html) documentation for more detail. The promise is - * resolved with the`linkString` upon success. - * - * The optional `options` argument can be a string specifying an encoding, or an - * object with an `encoding` property specifying the character encoding to use for - * the link path returned. If the `encoding` is set to `'buffer'`, the link path - * returned will be passed as a `Buffer` object. - * @since v10.0.0 - * @return Fulfills with the `linkString` upon success. - */ - function readlink(path: PathLike, options?: ObjectEncodingOptions | BufferEncoding | null): Promise; - /** - * Asynchronous readlink(2) - read value of a symbolic link. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function readlink(path: PathLike, options: BufferEncodingOption): Promise; - /** - * Asynchronous readlink(2) - read value of a symbolic link. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function readlink(path: PathLike, options?: ObjectEncodingOptions | string | null): Promise; - /** - * Creates a symbolic link. - * - * The `type` argument is only used on Windows platforms and can be one of `'dir'`,`'file'`, or `'junction'`. Windows junction points require the destination path - * to be absolute. When using `'junction'`, the `target` argument will - * automatically be normalized to absolute path. - * @since v10.0.0 - * @param [type='file'] - * @return Fulfills with `undefined` upon success. - */ - function symlink(target: PathLike, path: PathLike, type?: string | null): Promise; - /** - * Equivalent to `fsPromises.stat()` unless `path` refers to a symbolic link, - * in which case the link itself is stat-ed, not the file that it refers to. - * Refer to the POSIX [`lstat(2)`](http://man7.org/linux/man-pages/man2/lstat.2.html) document for more detail. 
- * @since v10.0.0 - * @return Fulfills with the {fs.Stats} object for the given symbolic link `path`. - */ - function lstat( - path: PathLike, - opts?: StatOptions & { - bigint?: false | undefined; - } - ): Promise; - function lstat( - path: PathLike, - opts: StatOptions & { - bigint: true; - } - ): Promise; - function lstat(path: PathLike, opts?: StatOptions): Promise; - /** - * @since v10.0.0 - * @return Fulfills with the {fs.Stats} object for the given `path`. - */ - function stat( - path: PathLike, - opts?: StatOptions & { - bigint?: false | undefined; - } - ): Promise; - function stat( - path: PathLike, - opts: StatOptions & { - bigint: true; - } - ): Promise; - function stat(path: PathLike, opts?: StatOptions): Promise; - /** - * Creates a new link from the `existingPath` to the `newPath`. See the POSIX [`link(2)`](http://man7.org/linux/man-pages/man2/link.2.html) documentation for more detail. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function link(existingPath: PathLike, newPath: PathLike): Promise; - /** - * If `path` refers to a symbolic link, then the link is removed without affecting - * the file or directory to which that link refers. If the `path` refers to a file - * path that is not a symbolic link, the file is deleted. See the POSIX [`unlink(2)`](http://man7.org/linux/man-pages/man2/unlink.2.html) documentation for more detail. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function unlink(path: PathLike): Promise; - /** - * Changes the permissions of a file. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function chmod(path: PathLike, mode: Mode): Promise; - /** - * Changes the permissions on a symbolic link. - * - * This method is only implemented on macOS. - * @deprecated Since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function lchmod(path: PathLike, mode: Mode): Promise; - /** - * Changes the ownership on a symbolic link. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function lchown(path: PathLike, uid: number, gid: number): Promise; - /** - * Changes the access and modification times of a file in the same way as `fsPromises.utimes()`, with the difference that if the path refers to a - * symbolic link, then the link is not dereferenced: instead, the timestamps of - * the symbolic link itself are changed. - * @since v14.5.0, v12.19.0 - * @return Fulfills with `undefined` upon success. - */ - function lutimes(path: PathLike, atime: TimeLike, mtime: TimeLike): Promise; - /** - * Changes the ownership of a file. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function chown(path: PathLike, uid: number, gid: number): Promise; - /** - * Change the file system timestamps of the object referenced by `path`. - * - * The `atime` and `mtime` arguments follow these rules: - * - * * Values can be either numbers representing Unix epoch time, `Date`s, or a - * numeric string like `'123456789.0'`. - * * If the value can not be converted to a number, or is `NaN`, `Infinity` or`-Infinity`, an `Error` will be thrown. - * @since v10.0.0 - * @return Fulfills with `undefined` upon success. - */ - function utimes(path: PathLike, atime: TimeLike, mtime: TimeLike): Promise; - /** - * Determines the actual location of `path` using the same semantics as the`fs.realpath.native()` function. - * - * Only paths that can be converted to UTF8 strings are supported. 
- * - * The optional `options` argument can be a string specifying an encoding, or an - * object with an `encoding` property specifying the character encoding to use for - * the path. If the `encoding` is set to `'buffer'`, the path returned will be - * passed as a `Buffer` object. - * - * On Linux, when Node.js is linked against musl libc, the procfs file system must - * be mounted on `/proc` in order for this function to work. Glibc does not have - * this restriction. - * @since v10.0.0 - * @return Fulfills with the resolved path upon success. - */ - function realpath(path: PathLike, options?: ObjectEncodingOptions | BufferEncoding | null): Promise; - /** - * Asynchronous realpath(3) - return the canonicalized absolute pathname. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function realpath(path: PathLike, options: BufferEncodingOption): Promise; - /** - * Asynchronous realpath(3) - return the canonicalized absolute pathname. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function realpath(path: PathLike, options?: ObjectEncodingOptions | BufferEncoding | null): Promise; - /** - * Creates a unique temporary directory. A unique directory name is generated by - * appending six random characters to the end of the provided `prefix`. Due to - * platform inconsistencies, avoid trailing `X` characters in `prefix`. Some - * platforms, notably the BSDs, can return more than six random characters, and - * replace trailing `X` characters in `prefix` with random characters. - * - * The optional `options` argument can be a string specifying an encoding, or an - * object with an `encoding` property specifying the character encoding to use. - * - * ```js - * import { mkdtemp } from 'fs/promises'; - * - * try { - * await mkdtemp(path.join(os.tmpdir(), 'foo-')); - * } catch (err) { - * console.error(err); - * } - * ``` - * - * The `fsPromises.mkdtemp()` method will append the six randomly selected - * characters directly to the `prefix` string. For instance, given a directory`/tmp`, if the intention is to create a temporary directory _within_`/tmp`, the`prefix` must end with a trailing - * platform-specific path separator - * (`require('path').sep`). - * @since v10.0.0 - * @return Fulfills with a string containing the filesystem path of the newly created temporary directory. - */ - function mkdtemp(prefix: string, options?: ObjectEncodingOptions | BufferEncoding | null): Promise; - /** - * Asynchronously creates a unique temporary directory. - * Generates six random characters to be appended behind a required `prefix` to create a unique temporary directory. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. - */ - function mkdtemp(prefix: string, options: BufferEncodingOption): Promise; - /** - * Asynchronously creates a unique temporary directory. - * Generates six random characters to be appended behind a required `prefix` to create a unique temporary directory. - * @param options The encoding (or an object specifying the encoding), used as the encoding of the result. If not provided, `'utf8'` is used. 
- */ - function mkdtemp(prefix: string, options?: ObjectEncodingOptions | BufferEncoding | null): Promise; - /** - * Asynchronously writes data to a file, replacing the file if it already exists.`data` can be a string, a buffer, an - * [AsyncIterable](https://tc39.github.io/ecma262/#sec-asynciterable-interface) or - * [Iterable](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#The_iterable_protocol) object. - * - * The `encoding` option is ignored if `data` is a buffer. - * - * If `options` is a string, then it specifies the encoding. - * - * The `mode` option only affects the newly created file. See `fs.open()` for more details. - * - * Any specified `FileHandle` has to support writing. - * - * It is unsafe to use `fsPromises.writeFile()` multiple times on the same file - * without waiting for the promise to be settled. - * - * Similarly to `fsPromises.readFile` \- `fsPromises.writeFile` is a convenience - * method that performs multiple `write` calls internally to write the buffer - * passed to it. For performance sensitive code consider using `fs.createWriteStream()` or `filehandle.createWriteStream()`. - * - * It is possible to use an `AbortSignal` to cancel an `fsPromises.writeFile()`. - * Cancelation is "best effort", and some amount of data is likely still - * to be written. - * - * ```js - * import { writeFile } from 'fs/promises'; - * import { Buffer } from 'buffer'; - * - * try { - * const controller = new AbortController(); - * const { signal } = controller; - * const data = new Uint8Array(Buffer.from('Hello Node.js')); - * const promise = writeFile('message.txt', data, { signal }); - * - * // Abort the request before the promise settles. - * controller.abort(); - * - * await promise; - * } catch (err) { - * // When a request is aborted - err is an AbortError - * console.error(err); - * } - * ``` - * - * Aborting an ongoing request does not abort individual operating - * system requests but rather the internal buffering `fs.writeFile` performs. - * @since v10.0.0 - * @param file filename or `FileHandle` - * @return Fulfills with `undefined` upon success. - */ - function writeFile( - file: PathLike | FileHandle, - data: string | NodeJS.ArrayBufferView | Iterable | AsyncIterable | Stream, - options?: - | (ObjectEncodingOptions & { - mode?: Mode | undefined; - flag?: OpenMode | undefined; - } & Abortable) - | BufferEncoding - | null - ): Promise; - /** - * Asynchronously append data to a file, creating the file if it does not yet - * exist. `data` can be a string or a `Buffer`. - * - * If `options` is a string, then it specifies the `encoding`. - * - * The `mode` option only affects the newly created file. See `fs.open()` for more details. - * - * The `path` may be specified as a `FileHandle` that has been opened - * for appending (using `fsPromises.open()`). - * @since v10.0.0 - * @param path filename or {FileHandle} - * @return Fulfills with `undefined` upon success. - */ - function appendFile(path: PathLike | FileHandle, data: string | Uint8Array, options?: (ObjectEncodingOptions & FlagAndOpenMode) | BufferEncoding | null): Promise; - /** - * Asynchronously reads the entire contents of a file. - * - * If no encoding is specified (using `options.encoding`), the data is returned - * as a `Buffer` object. Otherwise, the data will be a string. - * - * If `options` is a string, then it specifies the encoding. - * - * When the `path` is a directory, the behavior of `fsPromises.readFile()` is - * platform-specific. 
On macOS, Linux, and Windows, the promise will be rejected - * with an error. On FreeBSD, a representation of the directory's contents will be - * returned. - * - * It is possible to abort an ongoing `readFile` using an `AbortSignal`. If a - * request is aborted the promise returned is rejected with an `AbortError`: - * - * ```js - * import { readFile } from 'fs/promises'; - * - * try { - * const controller = new AbortController(); - * const { signal } = controller; - * const promise = readFile(fileName, { signal }); - * - * // Abort the request before the promise settles. - * controller.abort(); - * - * await promise; - * } catch (err) { - * // When a request is aborted - err is an AbortError - * console.error(err); - * } - * ``` - * - * Aborting an ongoing request does not abort individual operating - * system requests but rather the internal buffering `fs.readFile` performs. - * - * Any specified `FileHandle` has to support reading. - * @since v10.0.0 - * @param path filename or `FileHandle` - * @return Fulfills with the contents of the file. - */ - function readFile( - path: PathLike | FileHandle, - options?: - | ({ - encoding?: null | undefined; - flag?: OpenMode | undefined; - } & Abortable) - | null - ): Promise; - /** - * Asynchronously reads the entire contents of a file. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * If a `FileHandle` is provided, the underlying file will _not_ be closed automatically. - * @param options An object that may contain an optional flag. - * If a flag is not provided, it defaults to `'r'`. - */ - function readFile( - path: PathLike | FileHandle, - options: - | ({ - encoding: BufferEncoding; - flag?: OpenMode | undefined; - } & Abortable) - | BufferEncoding - ): Promise; - /** - * Asynchronously reads the entire contents of a file. - * @param path A path to a file. If a URL is provided, it must use the `file:` protocol. - * If a `FileHandle` is provided, the underlying file will _not_ be closed automatically. - * @param options An object that may contain an optional flag. - * If a flag is not provided, it defaults to `'r'`. - */ - function readFile( - path: PathLike | FileHandle, - options?: - | (ObjectEncodingOptions & - Abortable & { - flag?: OpenMode | undefined; - }) - | BufferEncoding - | null - ): Promise; - /** - * Asynchronously open a directory for iterative scanning. See the POSIX [`opendir(3)`](http://man7.org/linux/man-pages/man3/opendir.3.html) documentation for more detail. - * - * Creates an `fs.Dir`, which contains all further functions for reading from - * and cleaning up the directory. - * - * The `encoding` option sets the encoding for the `path` while opening the - * directory and subsequent read operations. - * - * Example using async iteration: - * - * ```js - * import { opendir } from 'fs/promises'; - * - * try { - * const dir = await opendir('./'); - * for await (const dirent of dir) - * console.log(dirent.name); - * } catch (err) { - * console.error(err); - * } - * ``` - * - * When using the async iterator, the `fs.Dir` object will be automatically - * closed after the iterator exits. - * @since v12.12.0 - * @return Fulfills with an {fs.Dir}. - */ - function opendir(path: PathLike, options?: OpenDirOptions): Promise; - /** - * Returns an async iterator that watches for changes on `filename`, where `filename`is either a file or a directory. 
- * - * ```js - * const { watch } = require('fs/promises'); - * - * const ac = new AbortController(); - * const { signal } = ac; - * setTimeout(() => ac.abort(), 10000); - * - * (async () => { - * try { - * const watcher = watch(__filename, { signal }); - * for await (const event of watcher) - * console.log(event); - * } catch (err) { - * if (err.name === 'AbortError') - * return; - * throw err; - * } - * })(); - * ``` - * - * On most platforms, `'rename'` is emitted whenever a filename appears or - * disappears in the directory. - * - * All the `caveats` for `fs.watch()` also apply to `fsPromises.watch()`. - * @since v15.9.0, v14.18.0 - * @return of objects with the properties: - */ - function watch( - filename: PathLike, - options: - | (WatchOptions & { - encoding: 'buffer'; - }) - | 'buffer' - ): AsyncIterable>; - /** - * Watch for changes on `filename`, where `filename` is either a file or a directory, returning an `FSWatcher`. - * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol. - * @param options Either the encoding for the filename provided to the listener, or an object optionally specifying encoding, persistent, and recursive options. - * If `encoding` is not supplied, the default of `'utf8'` is used. - * If `persistent` is not supplied, the default of `true` is used. - * If `recursive` is not supplied, the default of `false` is used. - */ - function watch(filename: PathLike, options?: WatchOptions | BufferEncoding): AsyncIterable>; - /** - * Watch for changes on `filename`, where `filename` is either a file or a directory, returning an `FSWatcher`. - * @param filename A path to a file or directory. If a URL is provided, it must use the `file:` protocol. - * @param options Either the encoding for the filename provided to the listener, or an object optionally specifying encoding, persistent, and recursive options. - * If `encoding` is not supplied, the default of `'utf8'` is used. - * If `persistent` is not supplied, the default of `true` is used. - * If `recursive` is not supplied, the default of `false` is used. - */ - function watch(filename: PathLike, options: WatchOptions | string): AsyncIterable> | AsyncIterable>; - /** - * Asynchronously copies the entire directory structure from `src` to `dest`, - * including subdirectories and files. - * - * When copying a directory to another directory, globs are not supported and - * behavior is similar to `cp dir1/ dir2/`. - * @since v16.7.0 - * @experimental - * @param src source path to copy. - * @param dest destination path to copy to. - * @return Fulfills with `undefined` upon success. - */ - function cp(source: string | URL, destination: string | URL, opts?: CopyOptions): Promise; -} -declare module 'node:fs/promises' { - export * from 'fs/promises'; -} diff --git a/spaces/rbanfield/libfacedetection/src/detect-image.cpp b/spaces/rbanfield/libfacedetection/src/detect-image.cpp deleted file mode 100644 index 6cab40030ab13c20bebba0bbf0906f0dee307364..0000000000000000000000000000000000000000 --- a/spaces/rbanfield/libfacedetection/src/detect-image.cpp +++ /dev/null @@ -1,135 +0,0 @@ -/* -By downloading, copying, installing or using the software you agree to this license. -If you do not agree to this license, do not download, install, -copy or use the software. - - - License Agreement For libfacedetection - (3-clause BSD License) - -Copyright (c) 2018-2020, Shiqi Yu, all rights reserved. 
-shiqi.yu@gmail.com - -Redistribution and use in source and binary forms, with or without modification, -are permitted provided that the following conditions are met: - - * Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - * Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - - * Neither the names of the copyright holders nor the names of the contributors - may be used to endorse or promote products derived from this software - without specific prior written permission. - -This software is provided by the copyright holders and contributors "as is" and -any express or implied warranties, including, but not limited to, the implied -warranties of merchantability and fitness for a particular purpose are disclaimed. -In no event shall copyright holders or contributors be liable for any direct, -indirect, incidental, special, exemplary, or consequential damages -(including, but not limited to, procurement of substitute goods or services; -loss of use, data, or profits; or business interruption) however caused -and on any theory of liability, whether in contract, strict liability, -or tort (including negligence or otherwise) arising in any way out of -the use of this software, even if advised of the possibility of such damage. -*/ - -#include <stdio.h> -#include <opencv2/opencv.hpp> -#include "facedetectcnn.h" - -//define the buffer size. Do not change the size! -//0x9000 = 1024 * (16 * 2 + 4), detect 1024 face at most -#define DETECT_BUFFER_SIZE 0x9000 -using namespace cv; -using namespace std; - -int main(int argc, char* argv[]) -{ - char* input_image_filename; - char* output_image_filename = NULL; - if(argc != 2 && argc != 3) - { - printf("Usage: %s <image_file_name> [output_file_name]\n", argv[0]); - return -1; - } - input_image_filename = argv[1]; - if(argc == 3) { - output_image_filename = argv[2]; - } - - - //load an image and convert it to gray (single-channel) - Mat image = imread(input_image_filename); - if(image.empty()) - { - fprintf(stderr, "Can not load the image file %s.\n", input_image_filename); - return -1; - } - - - int * pResults = NULL; - //pBuffer is used in the detection functions. - //If you call functions in multiple threads, please create one buffer for each thread! - unsigned char * pBuffer = (unsigned char *)malloc(DETECT_BUFFER_SIZE); - if(!pBuffer) - { - fprintf(stderr, "Can not alloc buffer.\n"); - return -1; - } - - - /////////////////////////////////////////// - // CNN face detection - // Best detection rate - ////////////////////////////////////////// - //!!! The input image must be a BGR one (three-channel) instead of RGB - //!!! DO NOT RELEASE pResults !!! - TickMeter cvtm; - cvtm.start(); - - pResults = facedetect_cnn(pBuffer, (unsigned char*)(image.ptr(0)), image.cols, image.rows, (int)image.step); - - cvtm.stop(); - printf("[\n"); - Mat result_image = image.clone(); - //print the detection results - for(int i = 0; i < (pResults ?
*pResults : 0); i++) - { - short * p = ((short*)(pResults + 1)) + 16*i; - int confidence = p[0]; - int x = p[1]; - int y = p[2]; - int w = p[3]; - int h = p[4]; - printf(" {\n"); - printf(" \"xmin\": %d,\n", x); - printf(" \"ymin\": %d,\n", y); - printf(" \"xmax\": %d,\n", x+w); - printf(" \"ymax\": %d,\n", y+h); - printf(" \"confidence\": %d\n", confidence); - if (i+1 < *pResults) { - printf(" },\n"); - } - else { - printf(" }\n"); - } - - //show the score of the face. Its range is [0-100] - char sScore[256]; - snprintf(sScore, 256, "%d", confidence); - cv::putText(result_image, sScore, cv::Point(x, y-3), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 255, 0), 1); - //draw face rectangle - rectangle(result_image, Rect(x, y, w, h), Scalar(0, 255, 0), 2); - } - printf("]\n"); - if (output_image_filename != NULL) - imwrite(output_image_filename, result_image); - - //release the buffer - free(pBuffer); - - return 0; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Android Os 2.2 Kernel 2.6.32 Build Number V1.5.5.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Android Os 2.2 Kernel 2.6.32 Build Number V1.5.5.md deleted file mode 100644 index 7096993071a728adc057569a0540791a8fe282c5..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Android Os 2.2 Kernel 2.6.32 Build Number V1.5.5.md +++ /dev/null @@ -1,8 +0,0 @@ -

      android os 2.2 kernel 2.6.32 build number v1.5.5


      Download Zip ►►►►► https://urlgoal.com/2uCKkN



      -
      -My android OS4.0 Kernel 3.0.8 build number V1.5.O won't load on screen and I don't know what the problem is... Please suggest a solution from Fixya.com. In the beginning I wrote: -I want to use my Android OS4.0 Kernel 3.0.8 build number V1.5.O to make a game app for my device (in this case, my samsung galaxy s2), but I can't get it to load and start working. -I tried several different things at first, but nothing worked. I would really appreciate it if someone can help me with this problem. I tried this link: http://www.youtube.com/watchv=Dy-N8Qtq7sE and that did nothing either. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Behind The Enemy Lines Torrents Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Behind The Enemy Lines Torrents Download.md deleted file mode 100644 index 80ad365345491a4a5466f357410c68681be4dcb4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Behind The Enemy Lines Torrents Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      behind the enemy lines torrents download


      DOWNLOAD ····· https://urlgoal.com/2uCJch



      - - d5da3c52bf
      -
      -
      -

      diff --git a/spaces/renumics/cifar100-sliceguard-demo/prepare.py b/spaces/renumics/cifar100-sliceguard-demo/prepare.py deleted file mode 100644 index 21c8121a4386479a05c38cc68fd50043a831efbd..0000000000000000000000000000000000000000 --- a/spaces/renumics/cifar100-sliceguard-demo/prepare.py +++ /dev/null @@ -1,23 +0,0 @@ -import pickle -import datasets -import os -import pandas as pd - -if __name__ == "__main__": - cache_file = "dataset_cache.parquet" - if os.path.exists(cache_file): - # Load dataset from cache - df = pd.read_parquet(cache_file) - print("Dataset loaded from cache.") - else: - # Load dataset using datasets.load_dataset() - dataset = datasets.load_dataset("renumics/cifar100-enriched", split="test") - print("Dataset loaded using datasets.load_dataset().") - - df = dataset.to_pandas() - - # Save dataset to cache - df.to_parquet(cache_file) - - print("Dataset saved to cache.") - diff --git a/spaces/robin0307/MMOCR/configs/_base_/schedules/schedule_sgd_160e.py b/spaces/robin0307/MMOCR/configs/_base_/schedules/schedule_sgd_160e.py deleted file mode 100644 index 985b8f63b3cb34f04ff55b298b44a53568a50ae8..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/_base_/schedules/schedule_sgd_160e.py +++ /dev/null @@ -1,13 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.08, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[80, 128]) -# running settings -runner = dict(type='EpochBasedRunner', max_epochs=160) -checkpoint_config = dict(interval=10) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/util_mixins.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/util_mixins.py deleted file mode 100644 index b83b6617f5e4a202067e1659bf448962a2a2bc72..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/util_mixins.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""This module defines the :class:`NiceRepr` mixin class, which defines a -``__repr__`` and ``__str__`` method that only depend on a custom ``__nice__`` -method, which you must define. This means you only have to overload one -function instead of two. Furthermore, if the object defines a ``__len__`` -method, then the ``__nice__`` method defaults to something sensible, otherwise -it is treated as abstract and raises ``NotImplementedError``. - -To use simply have your object inherit from :class:`NiceRepr` -(multi-inheritance should be ok). - -This code was copied from the ubelt library: https://github.com/Erotemic/ubelt - -Example: - >>> # Objects that define __nice__ have a default __str__ and __repr__ - >>> class Student(NiceRepr): - ... def __init__(self, name): - ... self.name = name - ... def __nice__(self): - ... return self.name - >>> s1 = Student('Alice') - >>> s2 = Student('Bob') - >>> print(f's1 = {s1}') - >>> print(f's2 = {s2}') - s1 = - s2 = - -Example: - >>> # Objects that define __len__ have a default __nice__ - >>> class Group(NiceRepr): - ... def __init__(self, data): - ... self.data = data - ... def __len__(self): - ... return len(self.data) - >>> g = Group([1, 2, 3]) - >>> print(f'g = {g}') - g = -""" -import warnings - - -class NiceRepr: - """Inherit from this class and define ``__nice__`` to "nicely" print your - objects. 
- - Defines ``__str__`` and ``__repr__`` in terms of ``__nice__`` function - Classes that inherit from :class:`NiceRepr` should redefine ``__nice__``. - If the inheriting class has a ``__len__``, method then the default - ``__nice__`` method will return its length. - - Example: - >>> class Foo(NiceRepr): - ... def __nice__(self): - ... return 'info' - >>> foo = Foo() - >>> assert str(foo) == '' - >>> assert repr(foo).startswith('>> class Bar(NiceRepr): - ... pass - >>> bar = Bar() - >>> import pytest - >>> with pytest.warns(None) as record: - >>> assert 'object at' in str(bar) - >>> assert 'object at' in repr(bar) - - Example: - >>> class Baz(NiceRepr): - ... def __len__(self): - ... return 5 - >>> baz = Baz() - >>> assert str(baz) == '' - """ - - def __nice__(self): - """str: a "nice" summary string describing this module""" - if hasattr(self, '__len__'): - # It is a common pattern for objects to use __len__ in __nice__ - # As a convenience we define a default __nice__ for these objects - return str(len(self)) - else: - # In all other cases force the subclass to overload __nice__ - raise NotImplementedError( - f'Define the __nice__ method for {self.__class__!r}') - - def __repr__(self): - """str: the string of the module""" - try: - nice = self.__nice__() - classname = self.__class__.__name__ - return f'<{classname}({nice}) at {hex(id(self))}>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) - - def __str__(self): - """str: the string of the module""" - try: - classname = self.__class__.__name__ - nice = self.__nice__() - return f'<{classname}({nice})>' - except NotImplementedError as ex: - warnings.warn(str(ex), category=RuntimeWarning) - return object.__repr__(self) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Led zeppelin bbc session rar download The ultimate guide for rock fans.md b/spaces/rorallitri/biomedical-language-models/logs/Led zeppelin bbc session rar download The ultimate guide for rock fans.md deleted file mode 100644 index bec799bb7762582aeb43a25aee4e924572816745..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Led zeppelin bbc session rar download The ultimate guide for rock fans.md +++ /dev/null @@ -1,9 +0,0 @@ - -

      BBC Sessions was originally released in 1997 and has been certified double platinum by the RIAA. THE COMPLETE BBC SESSIONS builds on that collection with a third disc that boasts eight unreleased performances. In addition, the set includes extensive session-by-session liner notes written by Dave Lewis. For the first time ever, it provides accurate details and notes about all of the band's BBC sessions.

      -

      Led zeppelin bbc session rar download


      Download Zip --->>> https://tinurll.com/2uzmvK



      -

      Musical highlights on this new collection include the debut of a long-lost radio session that has achieved near-mythic status among fans. Originally broadcast in April 1969, the session included three songs: "I Can't Quit You Baby," "You Shook Me," and the only recorded performance of "Sunshine Woman."

      -

      Mention of bands like this, Bogshed, The Nightingales, Big Flame, The Membranes et al has generated a moment of anamnesis - in my acid-house, ecstasy and free party haze beginning around 1988, I had totally forgotten about that whole period of intense gig-going and milieu of music when, for a brief moment, I moved my attention from American music (Buttholes, Sonic Youth, Flipper, Rollins, Bad Brains, Scratch Acid, JFA, Bongwater etc) back to the UK from around 86 to 88 if my memory serves: a scene that would in my opinion not have thrived as it did without the support and enthusiasm of the legend that was John Peel and those numerous BBC sessions such as these - I can still recall the pleasure in Peel's voice as The Noseflutes' almost Joycean session song titles were announced (and hopefully a few are included in this bounty).

      Got to check out this 100 worst album titles - though do you mean The Sunday Times? (easy to get it and The Sunday People mixed up - they always existed as a continuum of fake news in my brain, before fake news became a fake news item).

      -

      PS > Regarding your brief mention of the truly awful Aerosmith, they get a wonderfully camp slagging on the Danny Fields documentary 'Danny Says' (I knew his name from my teenage Nico obsession and her Jim Morrison and Iggy connections) but had no idea what an all-round interesting individual he was - well worth an illegal download...

      And speaking of Iggy, the recent Stooges documentary 'Gimme Danger' is well worth an illegal download...

      And speaking of illegality, that's well worth a download....

      -

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/rupeshs/fastsdcpu/backend/image_saver.py b/spaces/rupeshs/fastsdcpu/backend/image_saver.py deleted file mode 100644 index a243b6103b4e77a57cdd7b07242e99daf778da44..0000000000000000000000000000000000000000 --- a/spaces/rupeshs/fastsdcpu/backend/image_saver.py +++ /dev/null @@ -1,39 +0,0 @@ -from os import path, mkdir -from typing import Any -from uuid import uuid4 -from backend.models.lcmdiffusion_setting import LCMDiffusionSetting -import json - - -class ImageSaver: - @staticmethod - def save_images( - output_path: str, - images: Any, - folder_name: str = "", - format: str = ".png", - lcm_diffusion_setting: LCMDiffusionSetting = None, - ) -> None: - gen_id = uuid4() - for index, image in enumerate(images): - if not path.exists(output_path): - mkdir(output_path) - - if folder_name: - out_path = path.join( - output_path, - folder_name, - ) - else: - out_path = output_path - - if not path.exists(out_path): - mkdir(out_path) - image.save(path.join(out_path, f"{gen_id}-{index+1}{format}")) - if lcm_diffusion_setting: - with open(path.join(out_path, f"{gen_id}.json"), "w") as json_file: - json.dump( - lcm_diffusion_setting.model_dump(), - json_file, - indent=4, - ) diff --git a/spaces/safetensors/convert_large/convert.py b/spaces/safetensors/convert_large/convert.py deleted file mode 100644 index 899c140eabd4e44747aa02dacf5197357dc2db1d..0000000000000000000000000000000000000000 --- a/spaces/safetensors/convert_large/convert.py +++ /dev/null @@ -1,286 +0,0 @@ -import argparse -import json -import os -import shutil -from collections import defaultdict -from inspect import signature -from tempfile import TemporaryDirectory -from typing import Dict, List, Optional, Set, Tuple - -import torch - -from huggingface_hub import CommitInfo, CommitOperationAdd, Discussion, HfApi, hf_hub_download -from huggingface_hub.file_download import repo_folder_name -from safetensors.torch import load_file, save_file, _remove_duplicate_names - - -COMMIT_DESCRIPTION = """ -This is an automated PR created with https://huggingface.co/spaces/safetensors/convert - -This new file is equivalent to `pytorch_model.bin` but safe in the sense that -no arbitrary code can be put into it. - -These files also happen to load much faster than their pytorch counterpart: -https://colab.research.google.com/github/huggingface/notebooks/blob/main/safetensors_doc/en/speed.ipynb - -The widgets on your model page will run using this model even if this is not merged -making sure the file actually works. - -If you find any issues: please report here: https://huggingface.co/spaces/safetensors/convert/discussions - -Feel free to ignore this PR. 
-""" -PR_TITLE = "Adding `safetensors` variant of this model" - - -ConversionResult = Tuple[List["CommitOperationAdd"], List[Tuple[str, "Exception"]]] - - -class AlreadyExists(Exception): - pass - - -def rename(pt_filename: str) -> str: - filename, ext = os.path.splitext(pt_filename) - local = f"{filename}.safetensors" - local = local.replace("pytorch_model", "model") - return local - - -def convert_multi(model_id: str, folder: str, api: "HfApi") -> ConversionResult: - filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin.index.json") - with open(filename, "r") as f: - data = json.load(f) - - filenames = set(data["weight_map"].values()) - - - - index = os.path.join(folder, "model.safetensors.index.json") - with open(index, "w") as f: - newdata = {k: v for k, v in data.items()} - newmap = {k: rename(v) for k, v in data["weight_map"].items()} - newdata["weight_map"] = newmap - json.dump(newdata, f, indent=4) - - - new_pr = api.create_commit( - repo_id=model_id, - operations=[CommitOperationAdd(path_in_repo=index.split("/")[-1], path_or_fileobj=index)], - commit_message=PR_TITLE, - commit_description=COMMIT_DESCRIPTION, - create_pr=True, - ) - - for filename in filenames: - pt_filename = hf_hub_download(repo_id=model_id, filename=filename) - sf_filename = rename(pt_filename) - sf_filename = os.path.join(folder, sf_filename) - convert_file(pt_filename, sf_filename) - api.create_commit( - repo_id=model_id, - commit_message=f"Adds {sf_filename}", - revision=new_pr.pr_revision, - operations=[CommitOperationAdd(path_in_repo=sf_filename.split("/")[-1], path_or_fileobj=sf_filename)], - create_pr=False, - ) - os.remove(pt_filename) - os.remove(sf_filename) - return new_pr, [] - - -def convert_single(model_id: str, folder: str, api: "HfApi") -> ConversionResult: - pt_filename = hf_hub_download(repo_id=model_id, filename="pytorch_model.bin") - - sf_name = "model.safetensors" - sf_filename = os.path.join(folder, sf_name) - convert_file(pt_filename, sf_filename) - - new_pr = api.create_commit( - repo_id=model_id, - operations=[CommitOperationAdd(path_in_repo=sf_name, path_or_fileobj=sf_filename)], - commit_message=PR_TITLE, - commit_description=COMMIT_DESCRIPTION, - create_pr=True, - ) - return new_pr, [] - - -def convert_file( - pt_filename: str, - sf_filename: str, -): - loaded = torch.load(pt_filename, map_location="cpu") - if "state_dict" in loaded: - loaded = loaded["state_dict"] - to_removes = _remove_duplicate_names(loaded) - - metadata = {"format": "pt"} - for kept_name, to_remove_group in to_removes.items(): - for to_remove in to_remove_group: - if to_remove not in metadata: - metadata[to_remove] = kept_name - del loaded[to_remove] - # For tensors to be contiguous - loaded = {k: v.contiguous() for k, v in loaded.items()} - - dirname = os.path.dirname(sf_filename) - os.makedirs(dirname, exist_ok=True) - save_file(loaded, sf_filename, metadata=metadata) - reloaded = load_file(sf_filename) - for k in loaded: - pt_tensor = loaded[k] - sf_tensor = reloaded[k] - if not torch.equal(pt_tensor, sf_tensor): - raise RuntimeError(f"The output tensors do not match for key {k}") - - -def create_diff(pt_infos: Dict[str, List[str]], sf_infos: Dict[str, List[str]]) -> str: - errors = [] - for key in ["missing_keys", "mismatched_keys", "unexpected_keys"]: - pt_set = set(pt_infos[key]) - sf_set = set(sf_infos[key]) - - pt_only = pt_set - sf_set - sf_only = sf_set - pt_set - - if pt_only: - errors.append(f"{key} : PT warnings contain {pt_only} which are not present in SF warnings") - if sf_only: 
- errors.append(f"{key} : SF warnings contain {sf_only} which are not present in PT warnings") - return "\n".join(errors) - - -def previous_pr(api: "HfApi", model_id: str, pr_title: str) -> Optional["Discussion"]: - try: - main_commit = api.list_repo_commits(model_id)[0].commit_id - discussions = api.get_repo_discussions(repo_id=model_id) - except Exception: - return None - for discussion in discussions: - if discussion.status == "open" and discussion.is_pull_request and discussion.title == pr_title: - commits = api.list_repo_commits(model_id, revision=discussion.git_reference) - - if main_commit == commits[1].commit_id: - return discussion - return None - - -def convert_generic(model_id: str, folder: str, filenames: Set[str], api: "HfApi") -> ConversionResult: - operations = [] - errors = [] - - extensions = set([".bin", ".ckpt"]) - - new_pr = None - for filename in filenames: - prefix, ext = os.path.splitext(filename) - if ext in extensions: - pt_filename = hf_hub_download(model_id, filename=filename) - dirname, raw_filename = os.path.split(filename) - if raw_filename == "pytorch_model.bin": - # XXX: This is a special case to handle `transformers` and the - # `transformers` part of the model which is actually loaded by `transformers`. - sf_in_repo = os.path.join(dirname, "model.safetensors") - else: - sf_in_repo = f"{prefix}.safetensors" - sf_filename = os.path.join(folder, sf_in_repo) - try: - convert_file(pt_filename, sf_filename) - - if new_pr is None: - new_pr = api.create_commit( - repo_id=model_id, - operations=[CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename)], - commit_message=PR_TITLE, - commit_description=COMMIT_DESCRIPTION, - create_pr=True, - ) - else: - api.create_commit( - repo_id=model_id, - commit_message=f"Adds {sf_filename}", - revision=new_pr.pr_revision, - operations=[CommitOperationAdd(path_in_repo=sf_in_repo, path_or_fileobj=sf_filename)], - create_pr=False, - ) - os.remove(pt_filename) - os.remove(sf_filename) - except Exception as e: - errors.append((pt_filename, e)) - return new_pr, errors - - -def convert(api: "HfApi", model_id: str, force: bool = False) -> Tuple["CommitInfo", List["Exception"]]: - info = api.model_info(model_id) - filenames = set(s.rfilename for s in info.siblings) - - with TemporaryDirectory() as d: - folder = os.path.join(d, repo_folder_name(repo_id=model_id, repo_type="models")) - os.makedirs(folder) - new_pr = None - try: - operations = None - pr = previous_pr(api, model_id, PR_TITLE) - - library_name = getattr(info, "library_name", None) - if any(filename.endswith(".safetensors") for filename in filenames) and not force: - raise AlreadyExists(f"Model {model_id} is already converted, skipping..") - elif pr is not None and not force: - url = f"https://huggingface.co/{model_id}/discussions/{pr.num}" - new_pr = pr - raise AlreadyExists(f"Model {model_id} already has an open PR check out {url}") - elif library_name == "transformers": - if "pytorch_model.bin" in filenames: - new_pr, errors = convert_single(model_id, folder, api) - elif "pytorch_model.bin.index.json" in filenames: - new_pr, errors = convert_multi(model_id, folder, api) - else: - raise RuntimeError(f"Model {model_id} doesn't seem to be a valid pytorch model. 
Cannot convert") - else: - new_pr, errors = convert_generic(model_id, folder, filenames, api) - - print(f"Pr created at {new_pr.pr_url}") - finally: - shutil.rmtree(folder) - return new_pr, errors - - -if __name__ == "__main__": - DESCRIPTION = """ - Simple utility tool to convert automatically some weights on the hub to `safetensors` format. - It is PyTorch exclusive for now. - It works by downloading the weights (PT), converting them locally, and uploading them back - as a PR on the hub. - """ - parser = argparse.ArgumentParser(description=DESCRIPTION) - parser.add_argument( - "model_id", - type=str, - help="The name of the model on the hub to convert. E.g. `gpt2` or `facebook/wav2vec2-base-960h`", - ) - parser.add_argument( - "--force", - action="store_true", - help="Create the PR even if it already exists of if the model was already converted.", - ) - parser.add_argument( - "-y", - action="store_true", - help="Ignore safety prompt", - ) - args = parser.parse_args() - model_id = args.model_id - api = HfApi() - if args.y: - txt = "y" - else: - txt = input( - "This conversion script will unpickle a pickled file, which is inherently unsafe. If you do not trust this file, we invite you to use" - " https://huggingface.co/spaces/safetensors/convert or google colab or other hosted solution to avoid potential issues with this file." - " Continue [Y/n] ?" - ) - if txt.lower() in {"", "y"}: - _commit_info, _errors = convert(api, model_id, force=args.force) - else: - print(f"Answer was `{txt}` aborting.") diff --git a/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/dnnlib/__init__.py b/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/dnnlib/__init__.py deleted file mode 100644 index ad43827d8a279c4a797e09b51b8fd96e8e003ee6..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/stylegan/stylegan_tf/dnnlib/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -from . import submission - -from .submission.run_context import RunContext - -from .submission.submit import SubmitTarget -from .submission.submit import PathType -from .submission.submit import SubmitConfig -from .submission.submit import get_path_from_template -from .submission.submit import submit_run - -from .util import EasyDict - -submit_config: SubmitConfig = None # Package level variable for SubmitConfig which is only valid when inside the run function. diff --git a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/mask_decoder.py b/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/mask_decoder.py deleted file mode 100644 index 242ecb769555913252ee8d60cdc88af5d1cdfb16..0000000000000000000000000000000000000000 --- a/spaces/sam-hq-team/sam-hq/sam-hq/segment_anything/modeling/mask_decoder.py +++ /dev/null @@ -1,178 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from torch import nn -from torch.nn import functional as F - -from typing import List, Tuple, Type - -from .common import LayerNorm2d - - -class MaskDecoder(nn.Module): - def __init__( - self, - *, - transformer_dim: int, - transformer: nn.Module, - num_multimask_outputs: int = 3, - activation: Type[nn.Module] = nn.GELU, - iou_head_depth: int = 3, - iou_head_hidden_dim: int = 256, - ) -> None: - """ - Predicts masks given an image and prompt embeddings, using a - transformer architecture. - - Arguments: - transformer_dim (int): the channel dimension of the transformer - transformer (nn.Module): the transformer used to predict masks - num_multimask_outputs (int): the number of masks to predict - when disambiguating masks - activation (nn.Module): the type of activation to use when - upscaling masks - iou_head_depth (int): the depth of the MLP used to predict - mask quality - iou_head_hidden_dim (int): the hidden dimension of the MLP - used to predict mask quality - """ - super().__init__() - self.transformer_dim = transformer_dim - self.transformer = transformer - - self.num_multimask_outputs = num_multimask_outputs - - self.iou_token = nn.Embedding(1, transformer_dim) - self.num_mask_tokens = num_multimask_outputs + 1 - self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) - - self.output_upscaling = nn.Sequential( - nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim // 4), - activation(), - nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), - activation(), - ) - self.output_hypernetworks_mlps = nn.ModuleList( - [ - MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) - for i in range(self.num_mask_tokens) - ] - ) - - self.iou_prediction_head = MLP( - transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth - ) - - def forward( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - multimask_output: bool, - hq_token_only: bool, - interm_embeddings: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Predict masks given image and prompt embeddings. - - Arguments: - image_embeddings (torch.Tensor): the embeddings from the image encoder - image_pe (torch.Tensor): positional encoding with the shape of image_embeddings - sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes - dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs - multimask_output (bool): Whether to return multiple masks or a single - mask. - - Returns: - torch.Tensor: batched predicted masks - torch.Tensor: batched predictions of mask quality - """ - masks, iou_pred = self.predict_masks( - image_embeddings=image_embeddings, - image_pe=image_pe, - sparse_prompt_embeddings=sparse_prompt_embeddings, - dense_prompt_embeddings=dense_prompt_embeddings, - ) - - # Select the correct mask or masks for output - if multimask_output: - mask_slice = slice(1, None) - else: - mask_slice = slice(0, 1) - masks = masks[:, mask_slice, :, :] - iou_pred = iou_pred[:, mask_slice] - - # Prepare output - return masks, iou_pred - - def predict_masks( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Predicts masks. 
See 'forward' for more details.""" - # Concatenate output tokens - output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0) - output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1) - tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) - - # Expand per-image data in batch direction to be per-mask - src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) - src = src + dense_prompt_embeddings - pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) - b, c, h, w = src.shape - - # Run the transformer - hs, src = self.transformer(src, pos_src, tokens) - iou_token_out = hs[:, 0, :] - mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :] - - # Upscale mask embeddings and predict masks using the mask tokens - src = src.transpose(1, 2).view(b, c, h, w) - upscaled_embedding = self.output_upscaling(src) - hyper_in_list: List[torch.Tensor] = [] - for i in range(self.num_mask_tokens): - hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])) - hyper_in = torch.stack(hyper_in_list, dim=1) - b, c, h, w = upscaled_embedding.shape - masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w) - - # Generate mask quality predictions - iou_pred = self.iou_prediction_head(iou_token_out) - - return masks, iou_pred - - -# Lightly adapted from -# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa -class MLP(nn.Module): - def __init__( - self, - input_dim: int, - hidden_dim: int, - output_dim: int, - num_layers: int, - sigmoid_output: bool = False, - ) -> None: - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - self.sigmoid_output = sigmoid_output - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - if self.sigmoid_output: - x = F.sigmoid(x) - return x diff --git a/spaces/sandeepmajumdar/Bloom-Slim-Text-Generation/app.py b/spaces/sandeepmajumdar/Bloom-Slim-Text-Generation/app.py deleted file mode 100644 index 2cc8efb955d27dd1dc92582f6058cc1a879f954b..0000000000000000000000000000000000000000 --- a/spaces/sandeepmajumdar/Bloom-Slim-Text-Generation/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -from transformers import pipeline - -def convert(prompt): - model = pipeline('text-generation', model='bigscience/bloom-1b1') - result = model(prompt, repetition_penalty = 2.0, max_length=200, num_beams = 2, num_beam_groups = 2, top_k=1, temperature=0.9, diversity_penalty = 0.9) - return result[0]['generated_text'] - -with gr.Blocks() as bls: - gr.Markdown("Here is a Text Generation app") - with gr.Row(): - inp = gr.Textbox(label="Type your prompt here and click Run", placeholder='Example: The main cloud services of AWS are:') - out = gr.Textbox(label="Output") - btn = gr.Button("Run") - btn.click(fn=convert, inputs=inp, outputs=out) - -bls.launch() \ No newline at end of file diff --git a/spaces/sarulab-speech/UTMOS-demo/README.md b/spaces/sarulab-speech/UTMOS-demo/README.md deleted file mode 100644 index c2c16d3956eaae0f1c5d0488b5d9842632971acf..0000000000000000000000000000000000000000 --- a/spaces/sarulab-speech/UTMOS-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: UTMOS Demo -emoji: 🐢 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 2.8.10 
-app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/seduerr/text_analytics/text_analytics/pipes/emphatics_tagger.py b/spaces/seduerr/text_analytics/text_analytics/pipes/emphatics_tagger.py deleted file mode 100644 index a624331c66c9bd85067434352cdceb87e632ed50..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/pipes/emphatics_tagger.py +++ /dev/null @@ -1,42 +0,0 @@ -from spacy.matcher import PhraseMatcher -from spacy.tokens import Doc -from spacy.tokens import Span -from spacy.util import filter_spans -from spacy.language import Language - -from text_analytics.constants import ACCEPTED_LANGUAGES - -emphatics_getter = lambda doc: [doc[span['start']:span['end']] - for span in doc._.emphatics_span_indices] - -Doc.set_extension('emphatics_span_indices', force=False, default=[]) -Doc.set_extension('emphatics', force=False, getter=emphatics_getter) - -@Language.factory('emphatics tagger') -class EmphaticsTagger: - def __init__(self, name, nlp, language) -> None: - if not language in ACCEPTED_LANGUAGES: - raise ValueError(f'Language {language} is not supported yet') - - self._language = language - self._matcher = PhraseMatcher(nlp.vocab, attr='LOWER') - self._connectives = [] - if language == 'en': # emphatics connectives for spanish - self._connectives = ['him', 'there', 'their', 'it', 'he', 'she', 'we', 'who', 'them', 'they', 'you', 'himself', 'her', 'whom', 'itself', 'somebody', 'something', 'us', 'anybody', 'herself', 'anyone', 'everybody', 'nobody', 'everyone', 'themselves', 'yourself', 'someone', 'his', 'yours'] - else: # Support for future languages - pass - - for con in self._connectives: - self._matcher.add(con, None, nlp(con)) - - - def __call__(self, doc: Doc) -> Doc: - matches = self._matcher(doc) - emphatics_spans = [doc[start:end] for _, start, end in matches] - - doc._.emphatics_span_indices = [{'start': span.start, - 'end': span.end, - 'label': span.label} - for span in filter_spans(emphatics_spans)] # Save the emphatics connectives found - - return doc \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/decoder.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/decoder.py deleted file mode 100644 index c5a5b9ba23b50fa0ef3b7ec9e753d2ae5e43eb2a..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/decoder.py +++ /dev/null @@ -1,676 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Nagoya University (Tomoki Hayashi) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Tacotron2 decoder related modules.""" - -import six - -import torch -import torch.nn.functional as F - -from espnet.nets.pytorch_backend.rnn.attentions import AttForwardTA - - -def decoder_init(m): - """Initialize decoder parameters.""" - if isinstance(m, torch.nn.Conv1d): - torch.nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("tanh")) - - -class ZoneOutCell(torch.nn.Module): - """ZoneOut Cell module. - - This is a module of zoneout described in - `Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations`_. - This code is modified from `eladhoffer/seq2seq.pytorch`_. - - Examples: - >>> lstm = torch.nn.LSTMCell(16, 32) - >>> lstm = ZoneOutCell(lstm, 0.5) - - .. 
_`Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations`: - https://arxiv.org/abs/1606.01305 - - .. _`eladhoffer/seq2seq.pytorch`: - https://github.com/eladhoffer/seq2seq.pytorch - - """ - - def __init__(self, cell, zoneout_rate=0.1): - """Initialize zone out cell module. - - Args: - cell (torch.nn.Module): Pytorch recurrent cell module - e.g. `torch.nn.Module.LSTMCell`. - zoneout_rate (float, optional): Probability of zoneout from 0.0 to 1.0. - - """ - super(ZoneOutCell, self).__init__() - self.cell = cell - self.hidden_size = cell.hidden_size - self.zoneout_rate = zoneout_rate - if zoneout_rate > 1.0 or zoneout_rate < 0.0: - raise ValueError( - "zoneout probability must be in the range from 0.0 to 1.0." - ) - - def forward(self, inputs, hidden): - """Calculate forward propagation. - - Args: - inputs (Tensor): Batch of input tensor (B, input_size). - hidden (tuple): - - Tensor: Batch of initial hidden states (B, hidden_size). - - Tensor: Batch of initial cell states (B, hidden_size). - - Returns: - tuple: - - Tensor: Batch of next hidden states (B, hidden_size). - - Tensor: Batch of next cell states (B, hidden_size). - - """ - next_hidden = self.cell(inputs, hidden) - next_hidden = self._zoneout(hidden, next_hidden, self.zoneout_rate) - return next_hidden - - def _zoneout(self, h, next_h, prob): - # apply recursively - if isinstance(h, tuple): - num_h = len(h) - if not isinstance(prob, tuple): - prob = tuple([prob] * num_h) - return tuple( - [self._zoneout(h[i], next_h[i], prob[i]) for i in range(num_h)] - ) - - if self.training: - mask = h.new(*h.size()).bernoulli_(prob) - return mask * h + (1 - mask) * next_h - else: - return prob * h + (1 - prob) * next_h - - -class Prenet(torch.nn.Module): - """Prenet module for decoder of Spectrogram prediction network. - - This is a module of Prenet in the decoder of Spectrogram prediction network, - which described in `Natural TTS - Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`_. - The Prenet preforms nonlinear conversion - of inputs before input to auto-regressive lstm, - which helps to learn diagonal attentions. - - Note: - This module alway applies dropout even in evaluation. - See the detail in `Natural TTS Synthesis by - Conditioning WaveNet on Mel Spectrogram Predictions`_. - - .. _`Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`: - https://arxiv.org/abs/1712.05884 - - """ - - def __init__(self, idim, n_layers=2, n_units=256, dropout_rate=0.5): - """Initialize prenet module. - - Args: - idim (int): Dimension of the inputs. - odim (int): Dimension of the outputs. - n_layers (int, optional): The number of prenet layers. - n_units (int, optional): The number of prenet units. - - """ - super(Prenet, self).__init__() - self.dropout_rate = dropout_rate - self.prenet = torch.nn.ModuleList() - for layer in six.moves.range(n_layers): - n_inputs = idim if layer == 0 else n_units - self.prenet += [ - torch.nn.Sequential(torch.nn.Linear(n_inputs, n_units), torch.nn.ReLU()) - ] - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Batch of input tensors (B, ..., idim). - - Returns: - Tensor: Batch of output tensors (B, ..., odim). - - """ - for i in six.moves.range(len(self.prenet)): - x = F.dropout(self.prenet[i](x), self.dropout_rate) - return x - - -class Postnet(torch.nn.Module): - """Postnet module for Spectrogram prediction network. 
- - This is a module of Postnet in Spectrogram prediction network, - which described in `Natural TTS Synthesis by - Conditioning WaveNet on Mel Spectrogram Predictions`_. - The Postnet predicts refines the predicted - Mel-filterbank of the decoder, - which helps to compensate the detail sturcture of spectrogram. - - .. _`Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`: - https://arxiv.org/abs/1712.05884 - - """ - - def __init__( - self, - idim, - odim, - n_layers=5, - n_chans=512, - n_filts=5, - dropout_rate=0.5, - use_batch_norm=True, - ): - """Initialize postnet module. - - Args: - idim (int): Dimension of the inputs. - odim (int): Dimension of the outputs. - n_layers (int, optional): The number of layers. - n_filts (int, optional): The number of filter size. - n_units (int, optional): The number of filter channels. - use_batch_norm (bool, optional): Whether to use batch normalization.. - dropout_rate (float, optional): Dropout rate.. - - """ - super(Postnet, self).__init__() - self.postnet = torch.nn.ModuleList() - for layer in six.moves.range(n_layers - 1): - ichans = odim if layer == 0 else n_chans - ochans = odim if layer == n_layers - 1 else n_chans - if use_batch_norm: - self.postnet += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - ochans, - n_filts, - stride=1, - padding=(n_filts - 1) // 2, - bias=False, - ), - torch.nn.BatchNorm1d(ochans), - torch.nn.Tanh(), - torch.nn.Dropout(dropout_rate), - ) - ] - else: - self.postnet += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - ochans, - n_filts, - stride=1, - padding=(n_filts - 1) // 2, - bias=False, - ), - torch.nn.Tanh(), - torch.nn.Dropout(dropout_rate), - ) - ] - ichans = n_chans if n_layers != 1 else odim - if use_batch_norm: - self.postnet += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - odim, - n_filts, - stride=1, - padding=(n_filts - 1) // 2, - bias=False, - ), - torch.nn.BatchNorm1d(odim), - torch.nn.Dropout(dropout_rate), - ) - ] - else: - self.postnet += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - odim, - n_filts, - stride=1, - padding=(n_filts - 1) // 2, - bias=False, - ), - torch.nn.Dropout(dropout_rate), - ) - ] - - def forward(self, xs): - """Calculate forward propagation. - - Args: - xs (Tensor): Batch of the sequences of padded input tensors (B, idim, Tmax). - - Returns: - Tensor: Batch of padded output tensor. (B, odim, Tmax). - - """ - for i in six.moves.range(len(self.postnet)): - xs = self.postnet[i](xs) - return xs - - -class Decoder(torch.nn.Module): - """Decoder module of Spectrogram prediction network. - - This is a module of decoder of Spectrogram prediction network in Tacotron2, - which described in `Natural TTS - Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`_. - The decoder generates the sequence of - features from the sequence of the hidden states. - - .. _`Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`: - https://arxiv.org/abs/1712.05884 - - """ - - def __init__( - self, - idim, - odim, - att, - dlayers=2, - dunits=1024, - prenet_layers=2, - prenet_units=256, - postnet_layers=5, - postnet_chans=512, - postnet_filts=5, - output_activation_fn=None, - cumulate_att_w=True, - use_batch_norm=True, - use_concate=True, - dropout_rate=0.5, - zoneout_rate=0.1, - reduction_factor=1, - ): - """Initialize Tacotron2 decoder module. - - Args: - idim (int): Dimension of the inputs. - odim (int): Dimension of the outputs. - att (torch.nn.Module): Instance of attention class. 
- dlayers (int, optional): The number of decoder lstm layers. - dunits (int, optional): The number of decoder lstm units. - prenet_layers (int, optional): The number of prenet layers. - prenet_units (int, optional): The number of prenet units. - postnet_layers (int, optional): The number of postnet layers. - postnet_filts (int, optional): The number of postnet filter size. - postnet_chans (int, optional): The number of postnet filter channels. - output_activation_fn (torch.nn.Module, optional): - Activation function for outputs. - cumulate_att_w (bool, optional): - Whether to cumulate previous attention weight. - use_batch_norm (bool, optional): Whether to use batch normalization. - use_concate (bool, optional): Whether to concatenate encoder embedding - with decoder lstm outputs. - dropout_rate (float, optional): Dropout rate. - zoneout_rate (float, optional): Zoneout rate. - reduction_factor (int, optional): Reduction factor. - - """ - super(Decoder, self).__init__() - - # store the hyperparameters - self.idim = idim - self.odim = odim - self.att = att - self.output_activation_fn = output_activation_fn - self.cumulate_att_w = cumulate_att_w - self.use_concate = use_concate - self.reduction_factor = reduction_factor - - # check attention type - if isinstance(self.att, AttForwardTA): - self.use_att_extra_inputs = True - else: - self.use_att_extra_inputs = False - - # define lstm network - prenet_units = prenet_units if prenet_layers != 0 else odim - self.lstm = torch.nn.ModuleList() - for layer in six.moves.range(dlayers): - iunits = idim + prenet_units if layer == 0 else dunits - lstm = torch.nn.LSTMCell(iunits, dunits) - if zoneout_rate > 0.0: - lstm = ZoneOutCell(lstm, zoneout_rate) - self.lstm += [lstm] - - # define prenet - if prenet_layers > 0: - self.prenet = Prenet( - idim=odim, - n_layers=prenet_layers, - n_units=prenet_units, - dropout_rate=dropout_rate, - ) - else: - self.prenet = None - - # define postnet - if postnet_layers > 0: - self.postnet = Postnet( - idim=idim, - odim=odim, - n_layers=postnet_layers, - n_chans=postnet_chans, - n_filts=postnet_filts, - use_batch_norm=use_batch_norm, - dropout_rate=dropout_rate, - ) - else: - self.postnet = None - - # define projection layers - iunits = idim + dunits if use_concate else dunits - self.feat_out = torch.nn.Linear(iunits, odim * reduction_factor, bias=False) - self.prob_out = torch.nn.Linear(iunits, reduction_factor) - - # initialize - self.apply(decoder_init) - - def _zero_state(self, hs): - init_hs = hs.new_zeros(hs.size(0), self.lstm[0].hidden_size) - return init_hs - - def forward(self, hs, hlens, ys): - """Calculate forward propagation. - - Args: - hs (Tensor): Batch of the sequences of padded hidden states (B, Tmax, idim). - hlens (LongTensor): Batch of lengths of each input batch (B,). - ys (Tensor): - Batch of the sequences of padded target features (B, Lmax, odim). - - Returns: - Tensor: Batch of output tensors after postnet (B, Lmax, odim). - Tensor: Batch of output tensors before postnet (B, Lmax, odim). - Tensor: Batch of logits of stop prediction (B, Lmax). - Tensor: Batch of attention weights (B, Lmax, Tmax). - - Note: - This computation is performed in teacher-forcing manner. 
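        Example (an illustrative shape sketch; the concrete sizes are assumptions for demonstration only):
            With reduction_factor=2, odim=80 and targets ys of shape (B, 100, 80), the
            decoder is teacher-forced with every 2nd frame (B, 50, 80); each of the 50
            steps then emits 2 frames, so the returned outputs have shape (B, 100, 80),
            the stop logits have shape (B, 100) and the attention weights (B, 50, Tmax).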
- - """ - # thin out frames (B, Lmax, odim) -> (B, Lmax/r, odim) - if self.reduction_factor > 1: - ys = ys[:, self.reduction_factor - 1 :: self.reduction_factor] - - # length list should be list of int - hlens = list(map(int, hlens)) - - # initialize hidden states of decoder - c_list = [self._zero_state(hs)] - z_list = [self._zero_state(hs)] - for _ in six.moves.range(1, len(self.lstm)): - c_list += [self._zero_state(hs)] - z_list += [self._zero_state(hs)] - prev_out = hs.new_zeros(hs.size(0), self.odim) - - # initialize attention - prev_att_w = None - self.att.reset() - - # loop for an output sequence - outs, logits, att_ws = [], [], [] - for y in ys.transpose(0, 1): - if self.use_att_extra_inputs: - att_c, att_w = self.att(hs, hlens, z_list[0], prev_att_w, prev_out) - else: - att_c, att_w = self.att(hs, hlens, z_list[0], prev_att_w) - prenet_out = self.prenet(prev_out) if self.prenet is not None else prev_out - xs = torch.cat([att_c, prenet_out], dim=1) - z_list[0], c_list[0] = self.lstm[0](xs, (z_list[0], c_list[0])) - for i in six.moves.range(1, len(self.lstm)): - z_list[i], c_list[i] = self.lstm[i]( - z_list[i - 1], (z_list[i], c_list[i]) - ) - zcs = ( - torch.cat([z_list[-1], att_c], dim=1) - if self.use_concate - else z_list[-1] - ) - outs += [self.feat_out(zcs).view(hs.size(0), self.odim, -1)] - logits += [self.prob_out(zcs)] - att_ws += [att_w] - prev_out = y # teacher forcing - if self.cumulate_att_w and prev_att_w is not None: - prev_att_w = prev_att_w + att_w # Note: error when use += - else: - prev_att_w = att_w - - logits = torch.cat(logits, dim=1) # (B, Lmax) - before_outs = torch.cat(outs, dim=2) # (B, odim, Lmax) - att_ws = torch.stack(att_ws, dim=1) # (B, Lmax, Tmax) - - if self.reduction_factor > 1: - before_outs = before_outs.view( - before_outs.size(0), self.odim, -1 - ) # (B, odim, Lmax) - - if self.postnet is not None: - after_outs = before_outs + self.postnet(before_outs) # (B, odim, Lmax) - else: - after_outs = before_outs - before_outs = before_outs.transpose(2, 1) # (B, Lmax, odim) - after_outs = after_outs.transpose(2, 1) # (B, Lmax, odim) - logits = logits - - # apply activation function for scaling - if self.output_activation_fn is not None: - before_outs = self.output_activation_fn(before_outs) - after_outs = self.output_activation_fn(after_outs) - - return after_outs, before_outs, logits, att_ws - - def inference( - self, - h, - threshold=0.5, - minlenratio=0.0, - maxlenratio=10.0, - use_att_constraint=False, - backward_window=None, - forward_window=None, - ): - """Generate the sequence of features given the sequences of characters. - - Args: - h (Tensor): Input sequence of encoder hidden states (T, C). - threshold (float, optional): Threshold to stop generation. - minlenratio (float, optional): Minimum length ratio. - If set to 1.0 and the length of input is 10, - the minimum length of outputs will be 10 * 1 = 10. - minlenratio (float, optional): Minimum length ratio. - If set to 10 and the length of input is 10, - the maximum length of outputs will be 10 * 10 = 100. - use_att_constraint (bool): - Whether to apply attention constraint introduced in `Deep Voice 3`_. - backward_window (int): Backward window size in attention constraint. - forward_window (int): Forward window size in attention constraint. - - Returns: - Tensor: Output sequence of features (L, odim). - Tensor: Output sequence of stop probabilities (L,). - Tensor: Attention weights (L, T). - - Note: - This computation is performed in auto-regressive manner. - - .. 
_`Deep Voice 3`: https://arxiv.org/abs/1710.07654 - - """ - # setup - assert len(h.size()) == 2 - hs = h.unsqueeze(0) - ilens = [h.size(0)] - maxlen = int(h.size(0) * maxlenratio) - minlen = int(h.size(0) * minlenratio) - - # initialize hidden states of decoder - c_list = [self._zero_state(hs)] - z_list = [self._zero_state(hs)] - for _ in six.moves.range(1, len(self.lstm)): - c_list += [self._zero_state(hs)] - z_list += [self._zero_state(hs)] - prev_out = hs.new_zeros(1, self.odim) - - # initialize attention - prev_att_w = None - self.att.reset() - - # setup for attention constraint - if use_att_constraint: - last_attended_idx = 0 - else: - last_attended_idx = None - - # loop for an output sequence - idx = 0 - outs, att_ws, probs = [], [], [] - while True: - # updated index - idx += self.reduction_factor - - # decoder calculation - if self.use_att_extra_inputs: - att_c, att_w = self.att( - hs, - ilens, - z_list[0], - prev_att_w, - prev_out, - last_attended_idx=last_attended_idx, - backward_window=backward_window, - forward_window=forward_window, - ) - else: - att_c, att_w = self.att( - hs, - ilens, - z_list[0], - prev_att_w, - last_attended_idx=last_attended_idx, - backward_window=backward_window, - forward_window=forward_window, - ) - - att_ws += [att_w] - prenet_out = self.prenet(prev_out) if self.prenet is not None else prev_out - xs = torch.cat([att_c, prenet_out], dim=1) - z_list[0], c_list[0] = self.lstm[0](xs, (z_list[0], c_list[0])) - for i in six.moves.range(1, len(self.lstm)): - z_list[i], c_list[i] = self.lstm[i]( - z_list[i - 1], (z_list[i], c_list[i]) - ) - zcs = ( - torch.cat([z_list[-1], att_c], dim=1) - if self.use_concate - else z_list[-1] - ) - outs += [self.feat_out(zcs).view(1, self.odim, -1)] # [(1, odim, r), ...] - probs += [torch.sigmoid(self.prob_out(zcs))[0]] # [(r), ...] - if self.output_activation_fn is not None: - prev_out = self.output_activation_fn(outs[-1][:, :, -1]) # (1, odim) - else: - prev_out = outs[-1][:, :, -1] # (1, odim) - if self.cumulate_att_w and prev_att_w is not None: - prev_att_w = prev_att_w + att_w # Note: error when use += - else: - prev_att_w = att_w - if use_att_constraint: - last_attended_idx = int(att_w.argmax()) - - # check whether to finish generation - if int(sum(probs[-1] >= threshold)) > 0 or idx >= maxlen: - # check mininum length - if idx < minlen: - continue - outs = torch.cat(outs, dim=2) # (1, odim, L) - if self.postnet is not None: - outs = outs + self.postnet(outs) # (1, odim, L) - outs = outs.transpose(2, 1).squeeze(0) # (L, odim) - probs = torch.cat(probs, dim=0) - att_ws = torch.cat(att_ws, dim=0) - break - - if self.output_activation_fn is not None: - outs = self.output_activation_fn(outs) - - return outs, probs, att_ws - - def calculate_all_attentions(self, hs, hlens, ys): - """Calculate all of the attention weights. - - Args: - hs (Tensor): Batch of the sequences of padded hidden states (B, Tmax, idim). - hlens (LongTensor): Batch of lengths of each input batch (B,). - ys (Tensor): - Batch of the sequences of padded target features (B, Lmax, odim). - - Returns: - numpy.ndarray: Batch of attention weights (B, Lmax, Tmax). - - Note: - This computation is performed in teacher-forcing manner. 
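        Example (a minimal usage sketch; the variable names are illustrative assumptions):
            >>> att_ws = decoder.calculate_all_attentions(hs, hlens, ys)
            >>> # att_ws[0] is a (Lmax, Tmax) soft alignment for the first utterance;
            >>> # a clear diagonal band indicates a monotonic text-to-speech alignment.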
- - """ - # thin out frames (B, Lmax, odim) -> (B, Lmax/r, odim) - if self.reduction_factor > 1: - ys = ys[:, self.reduction_factor - 1 :: self.reduction_factor] - - # length list should be list of int - hlens = list(map(int, hlens)) - - # initialize hidden states of decoder - c_list = [self._zero_state(hs)] - z_list = [self._zero_state(hs)] - for _ in six.moves.range(1, len(self.lstm)): - c_list += [self._zero_state(hs)] - z_list += [self._zero_state(hs)] - prev_out = hs.new_zeros(hs.size(0), self.odim) - - # initialize attention - prev_att_w = None - self.att.reset() - - # loop for an output sequence - att_ws = [] - for y in ys.transpose(0, 1): - if self.use_att_extra_inputs: - att_c, att_w = self.att(hs, hlens, z_list[0], prev_att_w, prev_out) - else: - att_c, att_w = self.att(hs, hlens, z_list[0], prev_att_w) - att_ws += [att_w] - prenet_out = self.prenet(prev_out) if self.prenet is not None else prev_out - xs = torch.cat([att_c, prenet_out], dim=1) - z_list[0], c_list[0] = self.lstm[0](xs, (z_list[0], c_list[0])) - for i in six.moves.range(1, len(self.lstm)): - z_list[i], c_list[i] = self.lstm[i]( - z_list[i - 1], (z_list[i], c_list[i]) - ) - prev_out = y # teacher forcing - if self.cumulate_att_w and prev_att_w is not None: - prev_att_w = prev_att_w + att_w # Note: error when use += - else: - prev_att_w = att_w - - att_ws = torch.stack(att_ws, dim=1) # (B, Lmax, Tmax) - - return att_ws diff --git a/spaces/segments-tobias/conex/espnet/utils/cli_utils.py b/spaces/segments-tobias/conex/espnet/utils/cli_utils.py deleted file mode 100644 index c4a4cd15b72f832d9118aa7a7377a13de16c329b..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/utils/cli_utils.py +++ /dev/null @@ -1,65 +0,0 @@ -from collections.abc import Sequence -from distutils.util import strtobool as dist_strtobool -import sys - -import numpy - - -def strtobool(x): - # distutils.util.strtobool returns integer, but it's confusing, - return bool(dist_strtobool(x)) - - -def get_commandline_args(): - extra_chars = [ - " ", - ";", - "&", - "(", - ")", - "|", - "^", - "<", - ">", - "?", - "*", - "[", - "]", - "$", - "`", - '"', - "\\", - "!", - "{", - "}", - ] - - # Escape the extra characters for shell - argv = [ - arg.replace("'", "'\\''") - if all(char not in arg for char in extra_chars) - else "'" + arg.replace("'", "'\\''") + "'" - for arg in sys.argv - ] - - return sys.executable + " " + " ".join(argv) - - -def is_scipy_wav_style(value): - # If Tuple[int, numpy.ndarray] or not - return ( - isinstance(value, Sequence) - and len(value) == 2 - and isinstance(value[0], int) - and isinstance(value[1], numpy.ndarray) - ) - - -def assert_scipy_wav_style(value): - assert is_scipy_wav_style( - value - ), "Must be Tuple[int, numpy.ndarray], but got {}".format( - type(value) - if not isinstance(value, Sequence) - else "{}[{}]".format(type(value), ", ".join(str(type(v)) for v in value)) - ) diff --git a/spaces/shainis/book_reviews/app.py b/spaces/shainis/book_reviews/app.py deleted file mode 100644 index 69c9891a8db3416184bcdfb8a14113132eaf9c0e..0000000000000000000000000000000000000000 --- a/spaces/shainis/book_reviews/app.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -from fastai.text.all import * - -learn = load_learner('export.pkl') - -def reviews_generator(text, n_words): - preds = learn.predict(text, n_words) - return preds - -examples = [["The beauty of this book", 45], ["I didn't like this book because", 30]] - -gr.Interface(fn = reviews_generator, - title = "Book reviews generator", - 
description = "Type the beginning of a review, and the machine will generate a full review.", - inputs = [gr.Textbox(lines=1, placeholder="Enter the beginning of the review here", label="Starter text"), - gr.Slider(0, 100, label="Length of desired review")], - outputs = gr.outputs.Textbox(label="Generated Text"), - examples = examples - ).launch(share=False) - - diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py deleted file mode 100644 index 5bd18f70225e12b2e27fdb4eabcde91d959f8e31..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/models/GroundingDINO/utils.py +++ /dev/null @@ -1,268 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ - -import copy -import math - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -def _get_clones(module, N, layer_share=False): - # import ipdb; ipdb.set_trace() - if layer_share: - return nn.ModuleList([module for i in range(N)]) - else: - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def get_sine_pos_embed( - pos_tensor: torch.Tensor, - num_pos_feats: int = 128, - temperature: int = 10000, - exchange_xy: bool = True, -): - """generate sine position embedding from a position tensor - Args: - pos_tensor (torch.Tensor): shape: [..., n]. - num_pos_feats (int): projected shape for each float in the tensor. - temperature (int): temperature in the sine/cosine function. - exchange_xy (bool, optional): exchange pos x and pos y. \ - For example, input tensor is [x,y], the results will be [pos(y), pos(x)]. Defaults to True. - Returns: - pos_embed (torch.Tensor): shape: [..., n*num_pos_feats]. 
- """ - scale = 2 * math.pi - dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=pos_tensor.device) - dim_t = temperature ** (2 * torch.div(dim_t, 2, rounding_mode="floor") / num_pos_feats) - - def sine_func(x: torch.Tensor): - sin_x = x * scale / dim_t - sin_x = torch.stack((sin_x[..., 0::2].sin(), sin_x[..., 1::2].cos()), dim=3).flatten(2) - return sin_x - - pos_res = [sine_func(x) for x in pos_tensor.split([1] * pos_tensor.shape[-1], dim=-1)] - if exchange_xy: - pos_res[0], pos_res[1] = pos_res[1], pos_res[0] - pos_res = torch.cat(pos_res, dim=-1) - return pos_res - - -def gen_encoder_output_proposals( - memory: Tensor, memory_padding_mask: Tensor, spatial_shapes: Tensor, learnedwh=None -): - """ - Input: - - memory: bs, \sum{hw}, d_model - - memory_padding_mask: bs, \sum{hw} - - spatial_shapes: nlevel, 2 - - learnedwh: 2 - Output: - - output_memory: bs, \sum{hw}, d_model - - output_proposals: bs, \sum{hw}, 4 - """ - N_, S_, C_ = memory.shape - proposals = [] - _cur = 0 - for lvl, (H_, W_) in enumerate(spatial_shapes): - mask_flatten_ = memory_padding_mask[:, _cur : (_cur + H_ * W_)].view(N_, H_, W_, 1) - valid_H = torch.sum(~mask_flatten_[:, :, 0, 0], 1) - valid_W = torch.sum(~mask_flatten_[:, 0, :, 0], 1) - - # import ipdb; ipdb.set_trace() - - grid_y, grid_x = torch.meshgrid( - torch.linspace(0, H_ - 1, H_, dtype=torch.float32, device=memory.device), - torch.linspace(0, W_ - 1, W_, dtype=torch.float32, device=memory.device), - ) - grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) # H_, W_, 2 - - scale = torch.cat([valid_W.unsqueeze(-1), valid_H.unsqueeze(-1)], 1).view(N_, 1, 1, 2) - grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale - - if learnedwh is not None: - # import ipdb; ipdb.set_trace() - wh = torch.ones_like(grid) * learnedwh.sigmoid() * (2.0**lvl) - else: - wh = torch.ones_like(grid) * 0.05 * (2.0**lvl) - - # scale = torch.cat([W_[None].unsqueeze(-1), H_[None].unsqueeze(-1)], 1).view(1, 1, 1, 2).repeat(N_, 1, 1, 1) - # grid = (grid.unsqueeze(0).expand(N_, -1, -1, -1) + 0.5) / scale - # wh = torch.ones_like(grid) / scale - proposal = torch.cat((grid, wh), -1).view(N_, -1, 4) - proposals.append(proposal) - _cur += H_ * W_ - # import ipdb; ipdb.set_trace() - output_proposals = torch.cat(proposals, 1) - output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all( - -1, keepdim=True - ) - output_proposals = torch.log(output_proposals / (1 - output_proposals)) # unsigmoid - output_proposals = output_proposals.masked_fill(memory_padding_mask.unsqueeze(-1), float("inf")) - output_proposals = output_proposals.masked_fill(~output_proposals_valid, float("inf")) - - output_memory = memory - output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float(0)) - output_memory = output_memory.masked_fill(~output_proposals_valid, float(0)) - - # output_memory = output_memory.masked_fill(memory_padding_mask.unsqueeze(-1), float('inf')) - # output_memory = output_memory.masked_fill(~output_proposals_valid, float('inf')) - - return output_memory, output_proposals - - -class RandomBoxPerturber: - def __init__( - self, x_noise_scale=0.2, y_noise_scale=0.2, w_noise_scale=0.2, h_noise_scale=0.2 - ) -> None: - self.noise_scale = torch.Tensor( - [x_noise_scale, y_noise_scale, w_noise_scale, h_noise_scale] - ) - - def __call__(self, refanchors: Tensor) -> Tensor: - nq, bs, query_dim = refanchors.shape - device = refanchors.device - - noise_raw = torch.rand_like(refanchors) - noise_scale = 
self.noise_scale.to(device)[:query_dim] - - new_refanchors = refanchors * (1 + (noise_raw - 0.5) * noise_scale) - return new_refanchors.clamp_(0, 1) - - -def sigmoid_focal_loss( - inputs, targets, num_boxes, alpha: float = 0.25, gamma: float = 2, no_reduction=False -): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: A float tensor of arbitrary shape. - The predictions for each example. - targets: A float tensor with the same shape as inputs. Stores the binary - classification label for each element in inputs - (0 for the negative class and 1 for the positive class). - alpha: (optional) Weighting factor in range (0,1) to balance - positive vs negative examples. Default = -1 (no weighting). - gamma: Exponent of the modulating factor (1 - p_t) to - balance easy vs hard examples. - Returns: - Loss tensor - """ - prob = inputs.sigmoid() - ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none") - p_t = prob * targets + (1 - prob) * (1 - targets) - loss = ce_loss * ((1 - p_t) ** gamma) - - if alpha >= 0: - alpha_t = alpha * targets + (1 - alpha) * (1 - targets) - loss = alpha_t * loss - - if no_reduction: - return loss - - return loss.mean(1).sum() / num_boxes - - -class MLP(nn.Module): - """Very simple multi-layer perceptron (also called FFN)""" - - def __init__(self, input_dim, hidden_dim, output_dim, num_layers): - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - return x - - -def _get_activation_fn(activation, d_model=256, batch_dim=0): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - if activation == "prelu": - return nn.PReLU() - if activation == "selu": - return F.selu - - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") - - -def gen_sineembed_for_position(pos_tensor): - # n_query, bs, _ = pos_tensor.size() - # sineembed_tensor = torch.zeros(n_query, bs, 256) - scale = 2 * math.pi - dim_t = torch.arange(128, dtype=torch.float32, device=pos_tensor.device) - dim_t = 10000 ** (2 * (torch.div(dim_t, 2, rounding_mode='floor')) / 128) - x_embed = pos_tensor[:, :, 0] * scale - y_embed = pos_tensor[:, :, 1] * scale - pos_x = x_embed[:, :, None] / dim_t - pos_y = y_embed[:, :, None] / dim_t - pos_x = torch.stack((pos_x[:, :, 0::2].sin(), pos_x[:, :, 1::2].cos()), dim=3).flatten(2) - pos_y = torch.stack((pos_y[:, :, 0::2].sin(), pos_y[:, :, 1::2].cos()), dim=3).flatten(2) - if pos_tensor.size(-1) == 2: - pos = torch.cat((pos_y, pos_x), dim=2) - elif pos_tensor.size(-1) == 4: - w_embed = pos_tensor[:, :, 2] * scale - pos_w = w_embed[:, :, None] / dim_t - pos_w = torch.stack((pos_w[:, :, 0::2].sin(), pos_w[:, :, 1::2].cos()), dim=3).flatten(2) - - h_embed = pos_tensor[:, :, 3] * scale - pos_h = h_embed[:, :, None] / dim_t - pos_h = torch.stack((pos_h[:, :, 0::2].sin(), pos_h[:, :, 1::2].cos()), dim=3).flatten(2) - - pos = torch.cat((pos_y, pos_x, pos_w, pos_h), dim=2) - else: - raise ValueError("Unknown pos_tensor shape(-1):{}".format(pos_tensor.size(-1))) - return pos - - -class ContrastiveEmbed(nn.Module): - def __init__(self, max_text_len=256): - """ - Args: - max_text_len: max length of text. 
- """ - super().__init__() - self.max_text_len = max_text_len - - def forward(self, x, text_dict): - """_summary_ - - Args: - x (_type_): _description_ - text_dict (_type_): _description_ - { - 'encoded_text': encoded_text, # bs, 195, d_model - 'text_token_mask': text_token_mask, # bs, 195 - # True for used tokens. False for padding tokens - } - Returns: - _type_: _description_ - """ - assert isinstance(text_dict, dict) - - y = text_dict["encoded_text"] - text_token_mask = text_dict["text_token_mask"] - - res = x @ y.transpose(-1, -2) - res.masked_fill_(~text_token_mask[:, None, :], float("-inf")) - - # padding to max_text_len - new_res = torch.full((*res.shape[:-1], self.max_text_len), float("-inf"), device=res.device) - new_res[..., : res.shape[-1]] = res - - return new_res diff --git a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/README.md b/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/README.md deleted file mode 100644 index deaa6c2a145a02a211ca45c59541ff88ce4da23c..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/README.md +++ /dev/null @@ -1,227 +0,0 @@ -# BigStyleGAN -This is a copy of HuggingFace's BigGAN implementation, with the addition of layerwise latent inputs. - -# PyTorch pretrained BigGAN -An op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind. - -## Introduction - -This repository contains an op-for-op PyTorch reimplementation of DeepMind's BigGAN that was released with the paper [Large Scale GAN Training for High Fidelity Natural Image Synthesis](https://openreview.net/forum?id=B1xsqj09Fm) by Andrew Brock, Jeff Donahue and Karen Simonyan. - -This PyTorch implementation of BigGAN is provided with the [pretrained 128x128, 256x256 and 512x512 models by DeepMind](https://tfhub.dev/deepmind/biggan-deep-128/1). We also provide the scripts used to download and convert these models from the TensorFlow Hub models. - -This reimplementation was done from the raw computation graph of the Tensorflow version and behave similarly to the TensorFlow version (variance of the output difference of the order of 1e-5). - -This implementation currently only contains the generator as the weights of the discriminator were not released (although the structure of the discriminator is very similar to the generator so it could be added pretty easily. Tell me if you want to do a PR on that, I would be happy to help.) - -## Installation - -This repo was tested on Python 3.6 and PyTorch 1.0.1 - -PyTorch pretrained BigGAN can be installed from pip as follows: -```bash -pip install pytorch-pretrained-biggan -``` - -If you simply want to play with the GAN this should be enough. - -If you want to use the conversion scripts and the imagenet utilities, additional requirements are needed, in particular TensorFlow and NLTK. To install all the requirements please use the `full_requirements.txt` file: -```bash -git clone https://github.com/huggingface/pytorch-pretrained-BigGAN.git -cd pytorch-pretrained-BigGAN -pip install -r full_requirements.txt -``` - -## Models - -This repository provide direct and simple access to the pretrained "deep" versions of BigGAN for 128, 256 and 512 pixels resolutions as described in the [associated publication](https://openreview.net/forum?id=B1xsqj09Fm). 
-Here are some details on the models: - -- `BigGAN-deep-128`: a 50.4M parameters model generating 128x128 pixels images, the model dump weights 201 MB, -- `BigGAN-deep-256`: a 55.9M parameters model generating 256x256 pixels images, the model dump weights 224 MB, -- `BigGAN-deep-512`: a 56.2M parameters model generating 512x512 pixels images, the model dump weights 225 MB. - -Please refer to Appendix B of the paper for details on the architectures. - -All models comprise pre-computed batch norm statistics for 51 truncation values between 0 and 1 (see Appendix C.1 in the paper for details). - -## Usage - -Here is a quick-start example using `BigGAN` with a pre-trained model. - -See the [doc section](#doc) below for details on these classes and methods. - -```python -import torch -from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample, - save_as_images, display_in_terminal) - -# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows -import logging -logging.basicConfig(level=logging.INFO) - -# Load pre-trained model tokenizer (vocabulary) -model = BigGAN.from_pretrained('biggan-deep-256') - -# Prepare a input -truncation = 0.4 -class_vector = one_hot_from_names(['soap bubble', 'coffee', 'mushroom'], batch_size=3) -noise_vector = truncated_noise_sample(truncation=truncation, batch_size=3) - -# All in tensors -noise_vector = torch.from_numpy(noise_vector) -class_vector = torch.from_numpy(class_vector) - -# If you have a GPU, put everything on cuda -noise_vector = noise_vector.to('cuda') -class_vector = class_vector.to('cuda') -model.to('cuda') - -# Generate an image -with torch.no_grad(): - output = model(noise_vector, class_vector, truncation) - -# If you have a GPU put back on CPU -output = output.to('cpu') - -# If you have a sixtel compatible terminal you can display the images in the terminal -# (see https://github.com/saitoha/libsixel for details) -display_in_terminal(output) - -# Save results as png images -save_as_images(output) -``` - -![output_0](assets/output_0.png) -![output_1](assets/output_1.png) -![output_2](assets/output_2.png) - -## Doc - -### Loading DeepMind's pre-trained weights - -To load one of DeepMind's pre-trained models, instantiate a `BigGAN` model with `from_pretrained()` as: - -```python -model = BigGAN.from_pretrained(PRE_TRAINED_MODEL_NAME_OR_PATH, cache_dir=None) -``` - -where - -- `PRE_TRAINED_MODEL_NAME_OR_PATH` is either: - - - the shortcut name of a Google AI's or OpenAI's pre-trained model selected in the list: - - - `biggan-deep-128`: 12-layer, 768-hidden, 12-heads, 110M parameters - - `biggan-deep-256`: 24-layer, 1024-hidden, 16-heads, 340M parameters - - `biggan-deep-512`: 12-layer, 768-hidden, 12-heads , 110M parameters - - - a path or url to a pretrained model archive containing: - - - `config.json`: a configuration file for the model, and - - `pytorch_model.bin` a PyTorch dump of a pre-trained instance of `BigGAN` (saved with the usual `torch.save()`). - - If `PRE_TRAINED_MODEL_NAME_OR_PATH` is a shortcut name, the pre-trained weights will be downloaded from AWS S3 (see the links [here](pytorch_pretrained_biggan/model.py)) and stored in a cache folder to avoid future download (the cache folder can be found at `~/.pytorch_pretrained_biggan/`). -- `cache_dir` can be an optional path to a specific directory to download and cache the pre-trained model weights. - -### Configuration - -`BigGANConfig` is a class to store and load BigGAN configurations. 
It's defined in [`config.py`](./pytorch_pretrained_biggan/config.py). - -Here are some details on the attributes: - -- `output_dim`: output resolution of the GAN (128, 256 or 512) for the pre-trained models, -- `z_dim`: size of the noise vector (128 for the pre-trained models). -- `class_embed_dim`: size of the class embedding vectors (128 for the pre-trained models). -- `channel_width`: size of each channel (128 for the pre-trained models). -- `num_classes`: number of classes in the training dataset, like imagenet (1000 for the pre-trained models). -- `layers`: A list of layers definition. Each definition for a layer is a triple of [up-sample in the layer ? (bool), number of input channels (int), number of output channels (int)] -- `attention_layer_position`: Position of the self-attention layer in the layer hierarchy (8 for the pre-trained models). -- `eps`: epsilon value to use for spectral and batch normalization layers (1e-4 for the pre-trained models). -- `n_stats`: number of pre-computed statistics for the batch normalization layers associated to various truncation values between 0 and 1 (51 for the pre-trained models). - -### Model - -`BigGAN` is a PyTorch model (`torch.nn.Module`) of BigGAN defined in [`model.py`](./pytorch_pretrained_biggan/model.py). This model comprises the class embeddings (a linear layer) and the generator with a series of convolutions and conditional batch norms. The discriminator is currently not implemented since pre-trained weights have not been released for it. - -The inputs and output are **identical to the TensorFlow model inputs and outputs**. - -We detail them here. - -`BigGAN` takes as *inputs*: - -- `z`: a torch.FloatTensor of shape [batch_size, config.z_dim] with noise sampled from a truncated normal distribution, and -- `class_label`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). -- `truncation`: a float between 0 (not comprised) and 1. The truncation of the truncated normal used for creating the noise vector. This truncation value is used to selecte between a set of pre-computed statistics (means and variances) for the batch norm layers. - -`BigGAN` *outputs* an array of shape [batch_size, 3, resolution, resolution] where resolution is 128, 256 or 512 depending of the model: - -### Utilities: Images, Noise, Imagenet classes - -We provide a few utility method to use the model. They are defined in [`utils.py`](./pytorch_pretrained_biggan/utils.py). - -Here are some details on these methods: - -- `truncated_noise_sample(batch_size=1, dim_z=128, truncation=1., seed=None)`: - - Create a truncated noise vector. - - Params: - - batch_size: batch size. - - dim_z: dimension of z - - truncation: truncation value to use - - seed: seed for the random generator - - Output: - array of shape (batch_size, dim_z) - -- `convert_to_images(obj)`: - - Convert an output tensor from BigGAN in a list of images. - - Params: - - obj: tensor or numpy array of shape (batch_size, channels, height, width) - - Output: - - list of Pillow Images of size (height, width) - -- `save_as_images(obj, file_name='output')`: - - Convert and save an output tensor from BigGAN in a list of saved images. - - Params: - - obj: tensor or numpy array of shape (batch_size, channels, height, width) - - file_name: path and beggingin of filename to save. 
- Images will be saved as `file_name_{image_number}.png` - -- `display_in_terminal(obj)`: - - Convert and display an output tensor from BigGAN in the terminal. This function use `libsixel` and will only work in a libsixel-compatible terminal. Please refer to https://github.com/saitoha/libsixel for more details. - - Params: - - obj: tensor or numpy array of shape (batch_size, channels, height, width) - - file_name: path and beggingin of filename to save. - Images will be saved as `file_name_{image_number}.png` - -- `one_hot_from_int(int_or_list, batch_size=1)`: - - Create a one-hot vector from a class index or a list of class indices. - - Params: - - int_or_list: int, or list of int, of the imagenet classes (between 0 and 999) - - batch_size: batch size. - - If int_or_list is an int create a batch of identical classes. - - If int_or_list is a list, we should have `len(int_or_list) == batch_size` - - Output: - - array of shape (batch_size, 1000) - -- `one_hot_from_names(class_name, batch_size=1)`: - - Create a one-hot vector from the name of an imagenet class ('tennis ball', 'daisy', ...). We use NLTK's wordnet search to try to find the relevant synset of ImageNet and take the first one. If we can't find it direcly, we look at the hyponyms and hypernyms of the class name. - - Params: - - class_name: string containing the name of an imagenet object. - - Output: - - array of shape (batch_size, 1000) - -## Download and conversion scripts - -Scripts to download and convert the TensorFlow models from TensorFlow Hub are provided in [./scripts](./scripts/). - -The scripts can be used directly as: -```bash -./scripts/download_tf_hub_models.sh -./scripts/convert_tf_hub_models.sh -``` diff --git a/spaces/sidharthism/fashion-eye/netdissect/proggan.py b/spaces/sidharthism/fashion-eye/netdissect/proggan.py deleted file mode 100644 index e37ae15f373ef6ad14279bb581042434c5563539..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/netdissect/proggan.py +++ /dev/null @@ -1,299 +0,0 @@ -import torch, numpy, itertools -import torch.nn as nn -from collections import OrderedDict - - -def print_network(net, verbose=False): - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - if verbose: - print(net) - print('Total number of parameters: {:3.3f} M'.format(num_params / 1e6)) - - -def from_pth_file(filename): - ''' - Instantiate from a pth file. - ''' - state_dict = torch.load(filename) - if 'state_dict' in state_dict: - state_dict = state_dict['state_dict'] - # Convert old version of parameter names - if 'features.0.conv.weight' in state_dict: - state_dict = state_dict_from_old_pt_dict(state_dict) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -############################################################################### -# Modules -############################################################################### - -class ProgressiveGenerator(nn.Sequential): - def __init__(self, resolution=None, sizes=None, modify_sequence=None, - output_tanh=False): - ''' - A pytorch progessive GAN generator that can be converted directly - from either a tensorflow model or a theano model. It consists of - a sequence of convolutional layers, organized in pairs, with an - upsampling and reduction of channels at every other layer; and - then finally followed by an output layer that reduces it to an - RGB [-1..1] image. 
- - The network can be given more layers to increase the output - resolution. The sizes argument indicates the fieature depth at - each upsampling, starting with the input z: [input-dim, 4x4-depth, - 8x8-depth, 16x16-depth...]. The output dimension is 2 * 2**len(sizes) - - Some default architectures can be selected by supplying the - resolution argument instead. - - The optional modify_sequence function can be used to transform the - sequence of layers before the network is constructed. - - If output_tanh is set to True, the network applies a tanh to clamp - the output to [-1,1] before output; otherwise the output is unclamped. - ''' - assert (resolution is None) != (sizes is None) - if sizes is None: - sizes = { - 8: [512, 512, 512], - 16: [512, 512, 512, 512], - 32: [512, 512, 512, 512, 256], - 64: [512, 512, 512, 512, 256, 128], - 128: [512, 512, 512, 512, 256, 128, 64], - 256: [512, 512, 512, 512, 256, 128, 64, 32], - 1024: [512, 512, 512, 512, 512, 256, 128, 64, 32, 16] - }[resolution] - # Follow the schedule of upsampling given by sizes. - # layers are called: layer1, layer2, etc; then output_128x128 - sequence = [] - def add_d(layer, name=None): - if name is None: - name = 'layer%d' % (len(sequence) + 1) - sequence.append((name, layer)) - add_d(NormConvBlock(sizes[0], sizes[1], kernel_size=4, padding=3)) - add_d(NormConvBlock(sizes[1], sizes[1], kernel_size=3, padding=1)) - for i, (si, so) in enumerate(zip(sizes[1:-1], sizes[2:])): - add_d(NormUpscaleConvBlock(si, so, kernel_size=3, padding=1)) - add_d(NormConvBlock(so, so, kernel_size=3, padding=1)) - # Create an output layer. During training, the progressive GAN - # learns several such output layers for various resolutions; we - # just include the last (highest resolution) one. - dim = 4 * (2 ** (len(sequence) // 2 - 1)) - add_d(OutputConvBlock(sizes[-1], tanh=output_tanh), - name='output_%dx%d' % (dim, dim)) - # Allow the sequence to be modified - if modify_sequence is not None: - sequence = modify_sequence(sequence) - super().__init__(OrderedDict(sequence)) - - def forward(self, x): - # Convert vector input to 1x1 featuremap. 
- x = x.view(x.shape[0], x.shape[1], 1, 1) - return super().forward(x) - -class PixelNormLayer(nn.Module): - def __init__(self): - super(PixelNormLayer, self).__init__() - - def forward(self, x): - return x / torch.sqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - -class DoubleResolutionLayer(nn.Module): - def forward(self, x): - x = nn.functional.interpolate(x, scale_factor=2, mode='nearest') - return x - -class WScaleLayer(nn.Module): - def __init__(self, size, fan_in, gain=numpy.sqrt(2)): - super(WScaleLayer, self).__init__() - self.scale = gain / numpy.sqrt(fan_in) # No longer a parameter - self.b = nn.Parameter(torch.randn(size)) - self.size = size - - def forward(self, x): - x_size = x.size() - x = x * self.scale + self.b.view(1, -1, 1, 1).expand( - x_size[0], self.size, x_size[2], x_size[3]) - return x - -class NormConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class NormUpscaleConvBlock(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, padding): - super(NormUpscaleConvBlock, self).__init__() - self.norm = PixelNormLayer() - self.up = DoubleResolutionLayer() - self.conv = nn.Conv2d( - in_channels, out_channels, kernel_size, 1, padding, bias=False) - self.wscale = WScaleLayer(out_channels, in_channels, - gain=numpy.sqrt(2) / kernel_size) - self.relu = nn.LeakyReLU(inplace=True, negative_slope=0.2) - - def forward(self, x): - x = self.norm(x) - x = self.up(x) - x = self.conv(x) - x = self.relu(self.wscale(x)) - return x - -class OutputConvBlock(nn.Module): - def __init__(self, in_channels, tanh=False): - super().__init__() - self.norm = PixelNormLayer() - self.conv = nn.Conv2d( - in_channels, 3, kernel_size=1, padding=0, bias=False) - self.wscale = WScaleLayer(3, in_channels, gain=1) - self.clamp = nn.Hardtanh() if tanh else (lambda x: x) - - def forward(self, x): - x = self.norm(x) - x = self.conv(x) - x = self.wscale(x) - x = self.clamp(x) - return x - -############################################################################### -# Conversion -############################################################################### - -def from_tf_parameters(parameters): - ''' - Instantiate from tensorflow variables. - ''' - state_dict = state_dict_from_tf_parameters(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def from_old_pt_dict(parameters): - ''' - Instantiate from old pytorch state dict. - ''' - state_dict = state_dict_from_old_pt_dict(parameters) - sizes = sizes_from_state_dict(state_dict) - result = ProgressiveGenerator(sizes=sizes) - result.load_state_dict(state_dict) - return result - -def sizes_from_state_dict(params): - ''' - In a progressive GAN, the number of channels can change after each - upsampling. This function reads the state dict to figure the - number of upsamplings and the channel depth of each filter. 
- ''' - sizes = [] - for i in itertools.count(): - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % pt_layername] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[1]) - if i % 2 == 0: - sizes.append(weight.shape[0]) - return sizes - -def state_dict_from_tf_parameters(parameters): - ''' - Conversion from tensorflow parameters - ''' - def torch_from_tf(data): - return torch.from_numpy(data.eval()) - - params = dict(parameters) - result = {} - sizes = [] - for i in itertools.count(): - resolution = 4 * (2 ** (i // 2)) - # Translate parameter names. For example: - # 4x4/Dense/weight -> layer1.conv.weight - # 32x32/Conv0_up/weight -> layer7.conv.weight - # 32x32/Conv1/weight -> layer8.conv.weight - tf_layername = '%dx%d/%s' % (resolution, resolution, - 'Dense' if i == 0 else 'Conv' if i == 1 else - 'Conv0_up' if i % 2 == 0 else 'Conv1') - pt_layername = 'layer%d' % (i + 1) - # Stop looping when we run out of parameters. - try: - weight = torch_from_tf(params['%s/weight' % tf_layername]) - except KeyError: - break - # Transpose convolution weights into pytorch format. - if i == 0: - # Convert dense layer to 4x4 convolution - weight = weight.view(weight.shape[0], weight.shape[1] // 16, - 4, 4).permute(1, 0, 2, 3).flip(2, 3) - sizes.append(weight.shape[0]) - elif i % 2 == 0: - # Convert inverse convolution to convolution - weight = weight.permute(2, 3, 0, 1).flip(2, 3) - else: - # Ordinary Conv2d conversion. - weight = weight.permute(3, 2, 0, 1) - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - # Copy bias vector. - bias = torch_from_tf(params['%s/bias' % tf_layername]) - result['%s.wscale.b' % (pt_layername)] = bias - # Copy just finest-grained ToRGB output layers. For example: - # ToRGB_lod0/weight -> output.conv.weight - i -= 1 - resolution = 4 * (2 ** (i // 2)) - tf_layername = 'ToRGB_lod0' - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = torch_from_tf( - params['%s/weight' % tf_layername]).permute(3, 2, 0, 1) - result['%s.wscale.b' % pt_layername] = torch_from_tf( - params['%s/bias' % tf_layername]) - # Return parameters - return result - -def state_dict_from_old_pt_dict(params): - ''' - Conversion from the old pytorch model layer names. - ''' - result = {} - sizes = [] - for i in itertools.count(): - old_layername = 'features.%d' % i - pt_layername = 'layer%d' % (i + 1) - try: - weight = params['%s.conv.weight' % (old_layername)] - except KeyError: - break - if i == 0: - sizes.append(weight.shape[0]) - if i % 2 == 0: - sizes.append(weight.shape[1]) - result['%s.conv.weight' % (pt_layername)] = weight - result['%s.wscale.b' % (pt_layername)] = params[ - '%s.wscale.b' % (old_layername)] - # Copy the output layers. - i -= 1 - resolution = 4 * (2 ** (i // 2)) - pt_layername = 'output_%dx%d' % (resolution, resolution) - result['%s.conv.weight' % pt_layername] = params['output.conv.weight'] - result['%s.wscale.b' % pt_layername] = params['output.wscale.b'] - # Return parameters and also network architecture sizes. 
- return result - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download One Shot Kill Mini Militia Mod APK with Unlimited Ammo and Nitro.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download One Shot Kill Mini Militia Mod APK with Unlimited Ammo and Nitro.md deleted file mode 100644 index 6be47acd7166b8340f361710db7fdfb7ea358793..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download One Shot Kill Mini Militia Mod APK with Unlimited Ammo and Nitro.md +++ /dev/null @@ -1,108 +0,0 @@ -
      -

      Download One Shot Kill Mini Militia Mod and Enjoy Unlimited Fun

      -

Do you love playing shooting games with your friends online? Do you want to experience the thrill of killing your enemies with one shot? If yes, then you should try the One Shot Kill Mini Militia Mod, a modified version of the popular 2D multiplayer shooter game Mini Militia. In this article, we will tell you everything you need to know about this mod, including what it is, what its benefits are, how to download and install it, and some tips to play it. So, let's get started!

      -

      -

      What is Mini Militia?

      -

      A popular 2D multiplayer shooter game

      -

      Mini Militia is a fun cartoon-themed 2D game that is inspired by the original stickman shooter Doodle Army. It is developed by Miniclip.com and has over 100 million downloads on Google Play Store. The game is all about intense multiplayer combat, where you can battle with up to 6 players online or 12 players using local wi-fi. You can also play offline in survival mode against bots.

      -

      Features of Mini Militia

      -

      Online and offline modes

      -

      Mini Militia offers you two modes to play: online and offline. In online mode, you can join or create rooms with other players from around the world. You can choose from different game modes, such as deathmatch, team deathmatch, capture the flag, and custom. In offline mode, you can practice your skills in survival mode, where you have to face waves of bots.

      -

      Various maps and weapons

      -

      Mini Militia has over 20 maps to explore, each with its own terrain, obstacles, and secrets. You can also choose from a wide range of modern and futuristic weapons, such as pistols, shotguns, snipers, rockets, flamethrowers, lasers, saws, and more. You can also use grenades, mines, gas bombs, and shields to spice up the action.

      -

      Customizable avatars and skills

      -

      Mini Militia lets you customize your avatar with different outfits, hats, glasses, masks, and accessories. You can also upgrade your skills, such as health, accuracy, speed, reload time, melee damage, and bullet damage. You can earn skill points by playing online matches or by purchasing them with real money.

      -


      -

      What is One Shot Kill Mini Militia Mod?

      -

      A modified version of the original game

      -

      One Shot Kill Mini Militia Mod is a modified version of the original game that gives you some extra advantages over your opponents. It is not an official version of the game and it is not available on Google Play Store or App Store. You have to download it from a third-party website or app store.

      -

      Benefits of One Shot Kill Mini Militia Mod

      -

      Kill enemies with one shot

      -

      The main benefit of One Shot Kill Mini Militia Mod is that it allows you to kill your enemies with one shot. No matter what weapon you use or where you hit them, they will die instantly. This makes the game more fun and easy for you.

      -

      Unlimited ammo and nitro

      -

      Another benefit of One Shot Kill Mini Militia Mod is that it gives you unlimited ammo and nitro. You don't have to worry about running out of bullets or jetpack fuel. You can shoot and fly as much as you want without any limitations.

      -

      No reload time and pro pack unlocked

      -

      One more benefit of One Shot Kill Mini Militia Mod is that it eliminates the reload time and unlocks the pro pack. You can switch between weapons without any delay and use all the premium features of the game, such as dual wield, extra avatar customization, and more.

      -

      How to Download and Install One Shot Kill Mini Militia Mod?

      -

      Steps to download and install the mod apk file

      -

      If you want to download and install One Shot Kill Mini Militia Mod, you have to follow these steps:

      -
        -
1. Uninstall the original Mini Militia game from your device.
2. Go to a trusted website or app store that provides the mod apk file. For example, you can use [this link] to download the latest version of the mod.
3. Enable the unknown sources option in your device settings. This will allow you to install apps from sources other than Google Play Store or App Store.
4. Locate the downloaded mod apk file in your file manager and tap on it to install it.
5. Wait for the installation process to complete and then launch the game.
6. Enjoy the One Shot Kill Mini Militia Mod with unlimited fun!
      -

      Tips to play One Shot Kill Mini Militia Mod

      -

      Choose the right weapon and map

      -

      Even though you can kill your enemies with one shot, you still have to choose the right weapon and map for your style of play. Some weapons are more suitable for long-range combat, such as snipers and rockets, while others are better for close-range combat, such as shotguns and saws. Similarly, some maps are more open and spacious, while others are more narrow and crowded. You have to find the best combination of weapon and map that suits your preference.

      -

      Use your nitro wisely

      -

      Even though you have unlimited nitro, you still have to use it wisely. Nitro can help you move faster, dodge bullets, and reach higher places. However, it can also make you more visible and vulnerable to your enemies. You have to balance between using nitro and staying on the ground. You also have to avoid using nitro when you are near walls or ceilings, as it can cause you to bounce back and lose control.

      -

      Avoid close combat and grenades

      -

      Even though you can kill your enemies with one shot, you still have to avoid close combat and grenades. Close combat can be risky, as your enemies can also kill you with one shot if they get close enough. Grenades can also be deadly, as they can explode near you and damage you. You have to keep a safe distance from your enemies and use cover when possible. You also have to watch out for grenades thrown by your enemies and avoid them.

      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, One Shot Kill Mini Militia Mod is a modified version of the popular 2D multiplayer shooter game Mini Militia that gives you some extra advantages over your opponents. It allows you to kill your enemies with one shot, gives you unlimited ammo and nitro, eliminates the reload time, and unlocks the pro pack. You can download and install it from a third-party website or app store by following some simple steps. You can also follow some tips to play it better, such as choosing the right weapon and map, using your nitro wisely, and avoiding close combat and grenades.

      -

      FAQs

      -

      Here are some frequently asked questions about One Shot Kill Mini Militia Mod:

      -
        -
• Is One Shot Kill Mini Militia Mod safe?

  One Shot Kill Mini Militia Mod is safe as long as you download it from a trusted website or app store. However, it is not an official version of the game and it may not be compatible with some devices or updates. It may also cause some glitches or errors in the game. You should always back up your data before installing any mod apk file.

• Is One Shot Kill Mini Militia Mod legal?

  One Shot Kill Mini Militia Mod is not legal, as it violates the terms and conditions of the original game. It may also infringe on the intellectual property rights of the developers. Using any mod apk file may result in a ban or suspension from the game or legal action from the authorities.

• Is One Shot Kill Mini Militia Mod fair?

  One Shot Kill Mini Militia Mod is not fair, as it gives you an unfair advantage over your opponents. It may also ruin the fun and challenge of the game for you and others. It may also make you a target of hate and abuse from other players. You should respect the rules and ethics of the game and play it as it is meant to be played.

• Can I play One Shot Kill Mini Militia Mod with my friends?

  One Shot Kill Mini Militia Mod can be played with your friends online or offline. However, you have to make sure that your friends also have the same mod apk file installed on their devices. Otherwise, you may not be able to join or create rooms with them. You may also face some compatibility issues or errors in the game.

• Where can I find more mods for Mini Militia?

  There are many websites and app stores that provide different mods for Mini Militia, such as unlimited health, unlimited bombs, invisible mode, wall hack, god mode, and more. However, you have to be careful when downloading and installing any mod apk file, as some of them may contain viruses, malware, or spyware. You should always scan the file before installing it and use reliable antivirus software on your device.
      -

      I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention!

      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile MOD APK Unlimited Money Points and Menu for Android Devices.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile MOD APK Unlimited Money Points and Menu for Android Devices.md deleted file mode 100644 index d7649229891b950148c48cc565a9787208ec6267..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile MOD APK Unlimited Money Points and Menu for Android Devices.md +++ /dev/null @@ -1,176 +0,0 @@ -
      -

      FIFA Mobile APK Mod Menu Unlimited Money: How to Download and Play

      -

      If you are a fan of soccer games, you might have heard of FIFA Mobile, the official mobile game of the FIFA World Cup 2022. This game lets you build your own ultimate team of soccer stars, compete in various modes, and relive the world's greatest soccer tournament. But what if you want to enjoy the game with more features, benefits, and customization options? That's where FIFA Mobile APK Mod Menu Unlimited Money comes in. This is a modified version of the game that gives you unlimited money, points, coins, gems, and other resources. You can also use the mod menu to unlock all players, modes, kits, stadiums, and more. In this article, we will show you how to download and play FIFA Mobile APK Mod Menu Unlimited Money on your Android device.

      -

      -

      What is FIFA Mobile APK Mod Menu Unlimited Money?

      -

      FIFA Mobile APK Mod Menu Unlimited Money is a hacked version of the original FIFA Mobile game that gives you access to unlimited resources and features. You can use these resources to buy players, upgrade skills, customize kits, and more. You can also use the mod menu to enable or disable various options, such as auto-win, god mode, no ads, etc. With this mod, you can enjoy the game without any limitations or restrictions.

      -

      Features of FIFA Mobile APK Mod Menu Unlimited Money

      -

      Some of the features of FIFA Mobile APK Mod Menu Unlimited Money are:

      -
        -
• Unlimited money, points, coins, gems, and other resources
• Mod menu to enable or disable various options
• Unlocked all players, modes, kits, stadiums, and more
• No root required
• No ads
• Easy to install and use
      -

      Benefits of FIFA Mobile APK Mod Menu Unlimited Money

      -

      Some of the benefits of FIFA Mobile APK Mod Menu Unlimited Money are:

      -


      -
        -
• You can build your dream team with any players you want
• You can relive the FIFA World Cup 2022 mode with any of the 32 qualified nations
• You can experience next-level soccer simulation with realistic graphics and sound effects
• You can manage your own team and plan your strategy in real time or auto-play mode
• You can customize your game according to your preferences and style
• You can save your time and money by not spending on in-app purchases
• You can have more fun and excitement by playing with unlimited resources and features
      -

      How to Download FIFA Mobile APK Mod Menu Unlimited Money?

      -

      If you want to download FIFA Mobile APK Mod Menu Unlimited Money on your Android device, you need to follow some simple steps. Before that, you need to make sure that your device meets some requirements.

      -

      Requirements for FIFA Mobile APK Mod Menu Unlimited Money

      -

      Some of the requirements for FIFA Mobile APK Mod Menu Unlimited Money are:

      -
        -
• An Android device with version 5.0 or higher
• At least 1 GB of free storage space
• A stable internet connection
• Allow installation from unknown sources in your device settings
      -

      Steps to Download FIFA Mobile APK Mod Menu Unlimited Money

      -

      Some of the steps to download FIFA Mobile APK Mod Menu Unlimited Money are:

      -
        -
1. Click on the download link below to get the FIFA Mobile APK Mod Menu Unlimited Money file
2. Save the file in your device storage
3. Locate the file and tap on it to start the installation process
4. Follow the instructions on the screen to complete the installation
5. Launch the game and enjoy
      -

      Download FIFA Mobile APK Mod Menu Unlimited Money Here

      -

      How to Play FIFA Mobile APK Mod Menu Unlimited Money?

      -

      Once you have installed FIFA Mobile APK Mod Menu Unlimited Money on your device, you can start playing the game with unlimited resources and features. Here are some of the things you can do in the game:

      -

      Build Your Ultimate Team with Star Players

      -

      In FIFA Mobile APK Mod Menu Unlimited Money, you can build your own ultimate team with any players you want. You can use the unlimited money, points, coins, and gems to buy players from the market or open packs. You can also use the mod menu to unlock all players, including legends, icons, and special cards. You can create your own squad with your favorite players and formations.

      -

      Relive the FIFA World Cup 2022 Mode

      -

      In FIFA Mobile APK Mod Menu Unlimited Money, you can relive the FIFA World Cup 2022 mode with any of the 32 qualified nations. You can choose your favorite team and play through the group stage, knockout stage, and final. You can also use the mod menu to enable auto-win, god mode, or other options to make your game easier or harder. You can also customize your team's kit, badge, and stadium.

      -

      Experience Next-Level Soccer Simulation

      -

      In FIFA Mobile APK Mod Menu Unlimited Money, you can experience next-level soccer simulation with realistic graphics and sound effects. You can play in various modes, such as head-to-head, attack mode, season mode, and more. You can also compete in various events, such as tournaments, leagues, campaigns, and more. You can also use the mod menu to adjust the game speed, difficulty, camera angle, and more.

      -

      Manage Your Own Dream Team

      -

      In FIFA Mobile APK Mod Menu Unlimited Money, you can manage your own dream team and plan your strategy in real time or auto-play mode. You can train your players, upgrade their skills, boost their chemistry, and more. You can also use the mod menu to enable unlimited stamina, energy, or other options to make your game smoother or faster. You can also switch between different modes and events anytime you want.

      -

      Tips and Tricks for FIFA Mobile APK Mod Menu Unlimited Money

      -

      If you want to make the most out of FIFA Mobile APK Mod Menu Unlimited Money, here are some tips and tricks you can follow:

      -

      Use the Mod Menu to Customize Your Game

      -

      The mod menu is one of the best features of FIFA Mobile APK Mod Menu Unlimited Money. It allows you to enable or disable various options that can enhance or modify your game experience. You can access the mod menu by tapping on the icon on the top left corner of the screen. You can then toggle on or off any option you want. Some of the options are:

      -
        -
• Unlimited money: gives you unlimited money to buy players, packs, or anything else in the game
• Unlimited points: gives you unlimited points to buy premium items or access exclusive features in the game
• Unlimited coins: gives you unlimited coins to buy players, packs, or anything else in the game
• Unlimited gems: gives you unlimited gems to buy special items or access rare features in the game
• All players unlocked: unlocks all players in the game, including legends, icons, and special cards
• All modes unlocked: unlocks all modes in the game, including head-to-head, attack mode, season mode, and more
• All kits unlocked: unlocks all kits in the game, including national teams, clubs, and custom kits
• All stadiums unlocked: unlocks all stadiums in the game, including real-life stadiums and fantasy stadiums
• No ads: removes all ads from the game
• No root: allows you to play the game without rooting your device
• Auto-win: makes you win every match automatically
• God mode: makes you invincible in every match
• No skill cooldown: removes the cooldown time for using skills in matches
• No stamina cost: removes the stamina cost for playing matches or events
• No energy cost: removes the energy cost for playing matches or events
• Game speed: allows you to adjust the game speed from slow to fast
• Game difficulty: allows you to adjust the game difficulty from easy to hard
• Camera angle: allows you to adjust the camera angle from low to high
• And more
      -

      You can experiment with different options and see how they affect your game. You can also reset the mod menu to default settings anytime you want.

Earn More Coins and Points by Completing Tasks and Events

      Another way to earn more coins and points in FIFA Mobile APK Mod Menu Unlimited Money is by completing tasks and events. Tasks are daily or weekly objectives that reward you with coins, points, gems, or other items. Events are special modes that offer you exclusive rewards, such as players, kits, stadiums, or trophies. You can access tasks and events by tapping on the icons on the bottom of the screen. You can then choose any task or event you want and complete it. Some of the tasks and events are:

      • Daily Login: rewards you with coins, points, gems, or other items for logging in every day
      • Daily Objectives: rewards you with coins, points, gems, or other items for completing simple objectives, such as scoring goals, winning matches, or using skills
      • Weekly Objectives: rewards you with coins, points, gems, or other items for completing more challenging objectives, such as winning a certain number of matches, scoring a certain number of goals, or using a certain number of skills
      • Tournaments: rewards you with players, kits, stadiums, or trophies for competing in various tournaments, such as Champions League, Europa League, Copa America, or Euro 2020
      • Leagues: rewards you with players, kits, stadiums, or trophies for competing in various leagues, such as Premier League, La Liga, Bundesliga, or Serie A
      • Campaigns: rewards you with players, kits, stadiums, or trophies for completing various campaigns, such as Road to World Cup 2022, Ultimate Team Season 1, or Legends Season 1
      • And more

      You can complete as many tasks and events as you want and earn more coins and points. You can also use the mod menu to enable auto-win or god mode to make your tasks and events easier.

Upgrade Your Players and Skills

      Another way to improve your game in FIFA Mobile APK Mod Menu Unlimited Money is by upgrading your players and skills. You can use the unlimited money, points, coins, and gems to buy players from the market or open packs. You can also use the mod menu to unlock all players in the game. You can then train your players by using training items or coins. You can also upgrade their skills by using skill items or coins. You can access your players and skills by tapping on the icons on the top of the screen. You can then choose any player or skill you want and upgrade it. Some of the players and skills are:

      • Players: each player has a rating from 1 to 100 that indicates their overall performance. You can improve their rating by training them or upgrading their skills. Each player also has a position from GK to ST that indicates their role on the field. You can change their position by using position items or coins. Each player also has a chemistry from 0 to 100 that indicates their compatibility with other players on your team. You can improve their chemistry by using chemistry items or coins.
      • Skills: each skill has a level from 1 to 10 that indicates its effectiveness. You can improve its level by using skill items or coins. Each skill also has a type from basic to special that indicates its category. You can change its type by using type items or coins. Each skill also has a cooldown from 0 to 10 seconds that indicates its recharge time. You can reduce its cooldown by using cooldown items or coins.

      You can upgrade your players and skills as much as you want and make them stronger and faster.

Play Smart and Strategically

      The last tip for playing FIFA Mobile APK Mod Menu Unlimited Money is to play smart and strategically. Even with unlimited resources and features, you still need to use your brain and skills to win matches and events. Some of the things you can do are:

      • Use the mod menu to customize the game to your preferences and style: enable or disable options such as auto-win, god mode, or no ads, and adjust the game speed, difficulty, and camera angle.
      • Choose your team wisely according to your opponents' strengths and weaknesses, and build your squad with your favorite players and formations.
      • Use your skills effectively according to the situation on the field, such as dribbling, passing, shooting, and tackling.
      • Manage your team efficiently: train your players, upgrade their skills, boost their chemistry, and switch between modes and events whenever you need to.
      • Plan your strategy carefully according to the game mode and difficulty you choose.

      By playing smart and strategically, you can get more fun and excitement out of FIFA Mobile APK Mod Menu Unlimited Money.

      Conclusion


      FIFA Mobile APK Mod Menu Unlimited Money is a modified version of the original FIFA Mobile game that gives you unlimited resources and features. You can use these resources and features to build your own ultimate team of soccer stars, compete in various modes and events, and relive the FIFA World Cup 2022 mode. You can also use the mod menu to enable or disable various options that can enhance or modify your game experience. To download and play FIFA Mobile APK Mod Menu Unlimited Money on your Android device, you need to follow some simple steps that we have explained in this article. We have also shared some tips and tricks that you can follow to make the most out of FIFA Mobile APK Mod Menu Unlimited Money.

FAQs

      Here are some of the frequently asked questions about FIFA Mobile APK Mod Menu Unlimited Money:

Is FIFA Mobile APK Mod Menu Unlimited Money safe to use?

      Yes, FIFA Mobile APK Mod Menu Unlimited Money is safe to use as long as you download it from a trusted source. We have provided a download link below that is verified and secure. However, you should always be careful when downloading any modded or hacked games from unknown sources as they may contain viruses or malware that can harm your device.

Is FIFA Mobile APK Mod Menu Unlimited Money legal to use?

      No, FIFA Mobile APK Mod Menu Unlimited Money is not legal to use as it violates the terms and conditions of the original FIFA Mobile game. By using this mod, you are breaking the rules of the game and risking your account being banned or suspended by EA Sports. Therefore, we do not recommend using this mod for any illegal or unethical purposes.

Can I play FIFA Mobile APK Mod Menu Unlimited Money online with other players?

      Yes, you can play FIFA Mobile APK Mod Menu Unlimited Money online with other players who are using the same mod or version of the game. However, you cannot play online with players who are using the original or different versions of the game as they will not be compatible with your mod.

Can I update FIFA Mobile APK Mod Menu Unlimited Money to the latest version?

      No, you cannot update FIFA Mobile APK Mod Menu Unlimited Money to the latest version as it will overwrite your mod and remove all your resources and features. If you want to update your game, you need to uninstall your mod and install the original version of the game from Google Play Store or EA Sports website.

Can I uninstall FIFA Mobile APK Mod Menu Unlimited Money anytime I want?

      Yes, you can uninstall FIFA Mobile APK Mod Menu Unlimited Money anytime you want by following these steps:

      1. Go to your device settings.
      2. Tap on Apps or Applications.
      3. Find and tap on FIFA Mobile.
      4. Tap on Uninstall.
      5. Confirm your action.

      That's it. You have successfully uninstalled FIFA Mobile APK Mod Menu Unlimited Money from your device.
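
      If your device is connected to a computer with USB debugging turned on, the same uninstall can also be done from the command line with adb. This is only a rough sketch: the package name below is a placeholder (the exact name depends on the mod build), so list the installed packages first and use whatever name actually appears.

```bash
# List installed packages and look for the FIFA Mobile entry
# (the grep filter assumes the package name contains "fifa").
adb shell pm list packages | grep -i fifa

# Remove the app. Replace the placeholder package name with the
# one printed by the command above.
adb uninstall com.example.fifamobile.mod
```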

      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends v3.1.1 Mod Apk Unlock All Cars and Maps.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends v3.1.1 Mod Apk Unlock All Cars and Maps.md deleted file mode 100644 index a1fe69d01bc9d949f4fc141f59ec5d19fd023c48..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FR Legends v3.1.1 Mod Apk Unlock All Cars and Maps.md +++ /dev/null @@ -1,139 +0,0 @@ -
FR Legends v3.1.1 Mod APK: A Guide for Drift Lovers

      If you are a fan of drifting and racing games, you might have heard of FR Legends, a popular mobile game that lets you experience the thrill of drifting in various cars and tracks. But did you know that there is a modified version of FR Legends that gives you more features and benefits? In this article, we will tell you everything you need to know about FR Legends v3.1.1 Mod APK, how to download and install it, and how to play it like a pro.


      fr legends v3.1.1 mod apk


      Download Zip ……… https://ssurll.com/2uNXkz




      What is FR Legends?

A unique drifting game for mobile devices

      FR Legends is a game that was developed by Feng Li, a Chinese developer who wanted to create a realistic and fun drifting game for mobile devices. The game was released in 2018 and has since gained a lot of popularity among drift enthusiasts and casual gamers alike.


      FR Legends stands for Front-engine, Rear-wheel-drive Legend, which refers to the type of cars that are used for drifting. The game features various cars that are inspired by real-life models, such as the Nissan Skyline, the Toyota AE86, the Mazda RX-7, and more. You can customize your car with different parts, colors, stickers, and accessories to make it your own.

Features of FR Legends

      Some of the features that make FR Legends stand out from other racing games are:

      • You can drift in different modes, such as solo, tandem, battle, online multiplayer, and career mode.
      • You can drift on different tracks, such as Ebisu, Akina, Fukuoka Expressway, Kami Road, and more.
      • You can earn money and reputation by performing well in drifting events and challenges.
      • You can unlock new cars, parts, maps, and modes as you progress in the game.
      • You can enjoy realistic physics, graphics, sound effects, and animations that make the game immersive and exciting.

      What is FR Legends v3.1.1 Mod APK?

A modified version of the original game

      FR Legends v3.1.1 Mod APK is a modified version of the original game that was created by some fans who wanted to enhance the game experience and add more features and benefits. The mod apk is not an official update from the developer, but rather a fan-made modification that requires you to download and install it manually.

Benefits of using FR Legends v3.1.1 Mod APK

      Some of the benefits that you can get from using FR Legends v3.1.1 Mod APK are:

      • You can get unlimited money and gold coins that you can use to buy and upgrade your cars and parts.
      • You can get all the cars, parts, maps, and modes unlocked from the start.
      • You can get access to new cars, parts, maps, and modes that are not available in the original game.
      • You can get rid of ads and pop-ups that might interrupt your gameplay.
      • You can enjoy faster loading times and smoother performance.

      How to download and install FR Legends v3.1.1 Mod APK?

Steps to download and install FR Legends v3.1.1 Mod APK

      To download and install FR Legends v3.1.1 Mod APK, you need to follow these steps (a command-line alternative using adb is sketched after the list):

      1. Go to a trusted website that provides the download link for FR Legends v3.1.1 Mod APK, such as [this one].
      2. Click on the download button and wait for the file to be downloaded on your device.
      3. Go to your device settings and enable the option to install apps from unknown sources.
      4. Locate the downloaded file in your file manager and tap on it to start the installation process.
      5. Follow the instructions on the screen and wait for the installation to be completed.
      6. Launch the game and enjoy FR Legends v3.1.1 Mod APK.
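
      If you would rather sideload the file from a computer than tap through steps 3-5, adb can usually do the same job over USB debugging. This is only a sketch under assumptions: the APK filename below is a placeholder for whatever file you actually downloaded, and your device must already be authorized for USB debugging.

```bash
# Confirm the phone is connected and authorized.
adb devices

# Install the downloaded APK; replace the placeholder filename with yours.
# The -r flag keeps existing app data if an older version is already installed.
adb install -r fr-legends-v3.1.1-mod.apk
```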

      Tips to avoid malware and viruses


      While FR Legends v3.1.1 Mod APK can give you many benefits, it also comes with some risks. Since it is a modified version of the original game, it might contain malware or viruses that can harm your device or steal your personal information. To avoid these risks, you should follow these tips:

      • Only download FR Legends v3.1.1 Mod APK from trusted and reputable websites that have positive reviews and ratings from other users.
      • Scan the downloaded file with reliable antivirus software before installing it on your device.
      • Do not grant any unnecessary permissions or access to the app that might compromise your privacy or security.
      • Do not update the app from the Google Play Store or any other sources, as this might overwrite the mod apk and cause errors or crashes.
      • Back up your data and files regularly in case something goes wrong with the app or your device.

      How to play FR Legends v3.1.1 Mod APK?

Basic controls and gameplay

      The basic controls and gameplay of FR Legends v3.1.1 Mod APK are similar to the original game. You can use the virtual buttons on the screen to steer, accelerate, brake, handbrake, and shift gears. You can also tilt your device to control the camera angle and view.


      The gameplay of FR Legends v3.1.1 Mod APK is based on drifting, which is a driving technique that involves sliding the rear wheels of the car while maintaining control and speed. You can drift in different modes, such as solo, tandem, battle, online multiplayer, and career mode. In each mode, you will have different objectives and challenges that will test your drifting skills and earn you money and reputation.

Tips and tricks to master drifting

      If you want to master drifting in FR Legends v3.1.1 Mod APK, you should follow these tips and tricks:

      • Choose a car that suits your style and preference. Different cars have different characteristics, such as power, weight, handling, and grip. You can also customize your car with different parts, colors, stickers, and accessories to improve its performance and appearance.
      • Practice on different tracks and learn their layouts, curves, corners, and obstacles. Different tracks have different difficulty levels, weather conditions, and scenery. You can also unlock new tracks as you progress in the game.
      • Use the handbrake wisely and sparingly. The handbrake is a useful tool to initiate a drift, but it can also cause you to lose speed and control if you use it too much or too late. You should only use the handbrake when you need to make a sharp turn or adjust your angle.
      • Maintain a balance between speed and angle. Speed is important for drifting, but it can also make you lose traction and stability if you go too fast or too slow. Angle is important for drifting, but it can also make you spin out or hit the wall if you go too wide or too narrow. You should try to maintain a balance between speed and angle that allows you to drift smoothly and gracefully.
      • Watch other players and learn from them. You can watch other players' replays or join online multiplayer mode to see how they drift and what techniques they use. You can also challenge them to a battle or a tandem drift to test your skills against them.

      Conclusion


      FR Legends v3.1.1 Mod APK gives you unlimited resources and extra features, but it is not an official release, and downloading it from unknown sources can expose you to malware or viruses that can harm your device or steal your personal information. Therefore, you should be careful and follow the tips we provided to avoid these risks. If you are looking for a fun and realistic drifting game that gives you more features and benefits than the original game, you should try FR Legends v3.1.1 Mod APK.


      FAQs


      Here are some frequently asked questions about FR Legends v3.1.1 Mod APK:

      Q: Is FR Legends v3.1.1 Mod APK safe to use?
      A: FR Legends v3.1.1 Mod APK is not an official update from the developer, but rather a fan-made modification that might contain malware or viruses. Therefore, you should only download it from trusted and reputable websites, scan it with reliable antivirus software, and not grant any unnecessary permissions or access to the app.

      Q: Is FR Legends v3.1.1 Mod APK compatible with my device?
      A: FR Legends v3.1.1 Mod APK is compatible with most Android devices that have Android 4.1 or higher. However, some devices might experience errors or crashes due to different specifications or settings. If you encounter any problems, you can try to clear the cache, reinstall the app, or contact the mod apk provider for support.

      Q: Can I play FR Legends v3.1.1 Mod APK offline?
      A: Yes, you can play FR Legends v3.1.1 Mod APK offline in solo, tandem, and career mode. However, you will need an internet connection to play online multiplayer mode and access some online features.

      Q: Can I play FR Legends v3.1.1 Mod APK with my friends?
      A: Yes, you can play FR Legends v3.1.1 Mod APK with your friends in online multiplayer mode. You can join or create a room and invite your friends to join you. You can also chat with them and see their replays.

      Q: Can I update FR Legends v3.1.1 Mod APK from the Google Play Store?
      A: No, you cannot update FR Legends v3.1.1 Mod APK from the Google Play Store or any other sources, as this might overwrite the mod apk and cause errors or crashes. You should only update the app from the same website that you downloaded it from.

      \ No newline at end of file diff --git a/spaces/simplyjaga/movie_genius_openai/app.py b/spaces/simplyjaga/movie_genius_openai/app.py deleted file mode 100644 index a1b258db809f4ad59abd70e1557021edce9bff3f..0000000000000000000000000000000000000000 --- a/spaces/simplyjaga/movie_genius_openai/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import os -from moviegenius import mgchat - -def run(question, key): - if (key == None) or (key == ''): - raise gr.Error("Please enter your api key") - os.environ["OPENAI_API_KEY"] = key - ans, cost = mgchat(question) - return [ans, cost] - - -inputs=[gr.components.Textbox(label="Question (Assumption: Each question should contain the movie name in some way)"), gr.components.Textbox(label="OpenAI Key (It costs $$)")] -outputs=[gr.components.Textbox(label="Answer"), gr.components.Textbox(label="Cost")] -examples = [ - ["Why Sivaji comes to India? in the film Sivaji"], - ["In the Titanic movie, How Jack and Rose meet?"] -] - -demo = gr.Interface( - fn=run, - inputs=inputs, - outputs=outputs, - allow_flagging = "never", - examples=examples, - cache_examples=False -) -demo.launch() \ No newline at end of file diff --git a/spaces/siya02/Konakni-TTS/ttsv/scripts/hifi/train_hifi.sh b/spaces/siya02/Konakni-TTS/ttsv/scripts/hifi/train_hifi.sh deleted file mode 100644 index 287ca1159b5bf8f779d66885197fadbcd23b911e..0000000000000000000000000000000000000000 --- a/spaces/siya02/Konakni-TTS/ttsv/scripts/hifi/train_hifi.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/hifi/config_v1.json' -modeldir='../../checkpoints/hifi/'$gender -logdir='../../logs/hifi/'$gender - - -#################################################### - - - -python ../../src/hifi_gan/train.py \ - --config $config \ - --input_training_file '../../data/hifi/'$gender'/train.txt' \ - --input_validation_file '../../data/hifi/'$gender'/valid.txt' \ - --checkpoint_path $modeldir \ - --logs_path $logdir \ - --checkpoint_interval 10000 \ - --stdout_interval 50 diff --git a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh b/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh deleted file mode 100644 index a4be7221a250030db4cf1b7d157f1d6c0fd4b0f0..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/zen2_finetune/ner_zen2_base_cmeee.sh +++ /dev/null @@ -1,92 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_base_cmeee # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=2 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:2 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/lujunyu/experiments/ner_finetune/zen2_base_cmeee/%x-%j.log # output and error file name (%x=job name, %j=job id) -#SBATCH -p hgx - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/lujunyu/tmp/torch_extendsions - -MODEL_NAME=zen2_base - -TASK=cmeee - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/lujunyu/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CMeEE_copy/ -PRETRAINED_MODEL_PATH=/cognitive_comp/lujunyu/pretrain_models/zen2-base-med - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bio \ - --valid_data dev.char.bio \ - --test_data dev.char.bio \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 512 \ - --task_name cmeee \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bio \ - --middle_prefix I- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 10 \ - --gpus 2 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 0.25 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/lujunyu/Fengshenbang-LM-Git/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -srun python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/__init__.py b/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/__init__.py deleted file mode 100644 index 2c071fa45cfa595933f14cdd86f10541600f46bc..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/transfo_xl_reasoning/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# encoding=utf-8 -from fengshen.models.transfo_xl_denoise.modeling_transfo_xl_denoise import TransfoXLDenoiseModel as TransfoXLModel -from .generate import deduction_generate, abduction_generate \ No newline at end of file diff --git a/spaces/sklearn-docs/Gradient_Boosting_regression/README.md b/spaces/sklearn-docs/Gradient_Boosting_regression/README.md deleted file mode 100644 index cdd945759ede6ba24ff9154b71d9da5272a5f5e7..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Gradient_Boosting_regression/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gradient Boosting Regression -emoji: 🚀 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp deleted file mode 100644 index 41c6df6f721bd95a525fd6a03dd9882e863de042..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp +++ /dev/null @@ -1,164 +0,0 @@ -// modify from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c - -#include -#include - -#include -#include - -#define WITH_CUDA // always use cuda -#ifdef 
WITH_CUDA -int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step); - -int deform_conv_backward_parameters_cuda( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step); - -void modulated_deform_conv_cuda_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias); - -void modulated_deform_conv_cuda_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias); -#endif - -int deform_conv_forward(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_forward_cuda(input, weight, offset, output, columns, - ones, kW, kH, dW, dH, padW, padH, dilationW, dilationH, group, - deformable_group, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -int deform_conv_backward_input(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_backward_input_cuda(input, offset, gradOutput, - gradInput, gradOffset, weight, columns, kW, kH, dW, dH, padW, padH, - dilationW, dilationH, group, deformable_group, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -int deform_conv_backward_parameters( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float 
scale, int im2col_step) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return deform_conv_backward_parameters_cuda(input, offset, gradOutput, - gradWeight, columns, ones, kW, kH, dW, dH, padW, padH, dilationW, - dilationH, group, deformable_group, scale, im2col_step); -#else - AT_ERROR("deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("deform conv is not implemented on CPU"); -} - -void modulated_deform_conv_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return modulated_deform_conv_cuda_forward(input, weight, bias, ones, - offset, mask, output, columns, kernel_h, kernel_w, stride_h, - stride_w, pad_h, pad_w, dilation_h, dilation_w, group, - deformable_group, with_bias); -#else - AT_ERROR("modulated deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("modulated deform conv is not implemented on CPU"); -} - -void modulated_deform_conv_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - if (input.device().is_cuda()) { -#ifdef WITH_CUDA - return modulated_deform_conv_cuda_backward(input, weight, bias, ones, - offset, mask, columns, grad_input, grad_weight, grad_bias, grad_offset, - grad_mask, grad_output, kernel_h, kernel_w, stride_h, stride_w, - pad_h, pad_w, dilation_h, dilation_w, group, deformable_group, - with_bias); -#else - AT_ERROR("modulated deform conv is not compiled with GPU support"); -#endif - } - AT_ERROR("modulated deform conv is not implemented on CPU"); -} - - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("deform_conv_forward", &deform_conv_forward, - "deform forward"); - m.def("deform_conv_backward_input", &deform_conv_backward_input, - "deform_conv_backward_input"); - m.def("deform_conv_backward_parameters", - &deform_conv_backward_parameters, - "deform_conv_backward_parameters"); - m.def("modulated_deform_conv_forward", - &modulated_deform_conv_forward, - "modulated deform conv forward"); - m.def("modulated_deform_conv_backward", - &modulated_deform_conv_backward, - "modulated deform conv backward"); -} diff --git a/spaces/skyxx/skyxxChat/modules/webui_locale.py b/spaces/skyxx/skyxxChat/modules/webui_locale.py deleted file mode 100644 index 1ce4d97b9b41cbb2d9be3fdadc4c85f6ef897604..0000000000000000000000000000000000000000 --- a/spaces/skyxx/skyxxChat/modules/webui_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import locale -import commentjson as json - -class I18nAuto: - def __init__(self): - if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) - else: - config = {} - lang_config = config.get("language", "auto") - language = os.environ.get("LANGUAGE", lang_config) - if language == "auto": - language = locale.getdefaultlocale()[0] # get the language code of the system 
(ex. zh_CN) - self.language_map = {} - self.file_is_exists = os.path.isfile(f"./locale/{language}.json") - if self.file_is_exists: - with open(f"./locale/{language}.json", "r", encoding="utf-8") as f: - self.language_map.update(json.load(f)) - - def __call__(self, key): - if self.file_is_exists and key in self.language_map: - return self.language_map[key] - else: - return key diff --git a/spaces/snjyor/ChatGPT_demo/app.py b/spaces/snjyor/ChatGPT_demo/app.py deleted file mode 100644 index 286564f93d20e48330e1ad045f47719b2ff154e0..0000000000000000000000000000000000000000 --- a/spaces/snjyor/ChatGPT_demo/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import datetime -import json -import gradio as gr -import openai -import os - -with open("config.json", "r") as file: - config = json.loads(file.read()) -openai.api_key = os.getenv("api_key") - - -def grammar_fixer(sentence, daily_report=False, translater=False, story=False, character=None, temperature=0.5): - result = reply_by_gpt(sentence, character, daily_report=daily_report, translater=translater, story=story, temperature=temperature) - return result - - -def reply_by_gpt(sentence, character=None, daily_report=False, translater=False, story=False, temperature=0.5): - today = (datetime.datetime.now()+datetime.timedelta(hours=7)).strftime("%Y-%m-%d %H:%M:%S") - if sum([daily_report, translater, story]) > 1: - return "只能选一个!" - if not character and sum([daily_report, translater, story]) == 0: - character = f"你是一个大型语言模型,无所不知,人们可以问你任何问题,你的知识截止于2023年3月1日,当前时间为{today}" - elif daily_report: - character = config.get("daily_report").format(today) - elif translater: - character = config.get("translater") - elif story: - character = config.get("story") - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": character}, - {"role": "user", "content": sentence} - ], - temperature=temperature, - max_tokens=1000, - frequency_penalty=0.3, # [-2,2]之间,该值越大越随机 - presence_penalty=0.3, # [-2,2]之间,该值越大越随机 - ) - result = response.get("choices")[0].get("message").get("content") - return result - - -app = gr.Interface( - fn=grammar_fixer, - inputs=[ - gr.Text(placeholder="请自我介绍一下", label="问题", max_lines=5), - gr.Checkbox(value=False, label="写日报"), - gr.Checkbox(value=False, label="中译英"), - gr.Checkbox(value=False, label="续写故事"), - gr.Text(placeholder="你是一个没有感情的杀手!", label="自定义人格/能力", max_lines=5), - gr.Slider(0, 2, value=0.5, label="回答随机性") - ], - outputs=gr.Text(label="回答", placeholder="没有上下文记忆"), - examples="examples", - allow_flagging="never" -) -app.launch() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/enja-waitk.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/enja-waitk.md deleted file mode 100644 index fb9d82576f80b4405564a99774fc98ac2fe6ad3b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/docs/enja-waitk.md +++ /dev/null @@ -1,106 +0,0 @@ -# An example of English to Japaneses Simultaneous Translation System - -This is an example of training and evaluating a transformer *wait-k* English to Japanese simultaneous text-to-text translation model. - -## Data Preparation -This section introduces the data preparation for training and evaluation. 
-If you only want to evaluate the model, please jump to [Inference & Evaluation](#inference-&-evaluation) - -For illustration, we only use the following subsets of the available data from [WMT20 news translation task](http://www.statmt.org/wmt20/translation-task.html), which results in 7,815,391 sentence pairs. -- News Commentary v16 -- Wiki Titles v3 -- WikiMatrix V1 -- Japanese-English Subtitle Corpus -- The Kyoto Free Translation Task Corpus - -We use WMT20 development data as development set. Training `transformer_vaswani_wmt_en_de_big` model on such amount of data will result in 17.3 BLEU with greedy search and 19.7 with beam (10) search. Notice that a better performance can be achieved with the full WMT training data. - -We use [sentencepiece](https://github.com/google/sentencepiece) toolkit to tokenize the data with a vocabulary size of 32000. -Additionally, we filtered out the sentences longer than 200 words after tokenization. -Assuming the tokenized text data is saved at `${DATA_DIR}`, -we prepare the data binary with the following command. - -```bash -fairseq-preprocess \ - --source-lang en --target-lang ja \ - --trainpref ${DATA_DIR}/train \ - --validpref ${DATA_DIR}/dev \ - --testpref ${DATA_DIR}/test \ - --destdir ${WMT20_ENJA_DATA_BIN} \ - --nwordstgt 32000 --nwordssrc 32000 \ - --workers 20 -``` - -## Simultaneous Translation Model Training -To train a wait-k `(k=10)` model. -```bash -fairseq-train ${WMT20_ENJA_DATA_BIN} \ - --save-dir ${SAVEDIR} - --simul-type waitk \ - --waitk-lagging 10 \ - --max-epoch 70 \ - --arch transformer_monotonic_vaswani_wmt_en_de_big \ - --optimizer adam \ - --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt \ - --warmup-init-lr 1e-07 \ - --warmup-updates 4000 \ - --lr 0.0005 \ - --stop-min-lr 1e-09 \ - --clip-norm 10.0 \ - --dropout 0.3 \ - --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --max-tokens 3584 -``` -This command is for training on 8 GPUs. Equivalently, the model can be trained on one GPU with `--update-freq 8`. - -## Inference & Evaluation -First of all, install [SimulEval](https://github.com/facebookresearch/SimulEval) for evaluation. - -```bash -git clone https://github.com/facebookresearch/SimulEval.git -cd SimulEval -pip install -e . -``` - -The following command is for the evaluation. -Assuming the source and reference files are `${SRC_FILE}` and `${REF_FILE}`, the sentencepiece model file for English is saved at `${SRC_SPM_PATH}` - - -```bash -simuleval \ - --source ${SRC_FILE} \ - --target ${TGT_FILE} \ - --data-bin ${WMT20_ENJA_DATA_BIN} \ - --sacrebleu-tokenizer ja-mecab \ - --eval-latency-unit char \ - --no-space \ - --src-splitter-type sentencepiecemodel \ - --src-splitter-path ${SRC_SPM_PATH} \ - --agent ${FAIRSEQ}/examples/simultaneous_translation/agents/simul_trans_text_agent_enja.py \ - --model-path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --output ${OUTPUT} \ - --scores -``` - -The `--data-bin` should be the same in previous sections if you prepare the data from the scratch. -If only for evaluation, a prepared data directory can be found [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_databin.tgz) and a pretrained checkpoint (wait-k=10 model) can be downloaded from [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_wait10_ckpt.pt). 
- -The output should look like this: -```bash -{ - "Quality": { - "BLEU": 11.442253287568398 - }, - "Latency": { - "AL": 8.6587861866951, - "AP": 0.7863304776251316, - "DAL": 9.477850951194764 - } -} -``` -The latency is evaluated by characters (`--eval-latency-unit`) on the target side. The latency is evaluated with `sacrebleu` with `MeCab` tokenizer `--sacrebleu-tokenizer ja-mecab`. `--no-space` indicates that do not add space when merging the predicted words. - -If `--output ${OUTPUT}` option is used, the detailed log and scores will be stored under the `${OUTPUT}` directory. diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/gpt2_bpe.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/gpt2_bpe.py deleted file mode 100644 index b7426b249bbbabd8e20bbe8ca5449809efdf85fc..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/encoders/gpt2_bpe.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - -from .gpt2_bpe_utils import get_encoder - - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" - - -@dataclass -class GPT2BPEConfig(FairseqDataclass): - gpt2_encoder_json: str = field( - default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"} - ) - gpt2_vocab_bpe: str = field( - default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"} - ) - - -@register_bpe("gpt2", dataclass=GPT2BPEConfig) -class GPT2BPE(object): - def __init__(self, cfg): - encoder_json = file_utils.cached_path(cfg.gpt2_encoder_json) - vocab_bpe = file_utils.cached_path(cfg.gpt2_vocab_bpe) - self.bpe = get_encoder(encoder_json, vocab_bpe) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x))) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"", ""} and not tok.startswith('<') else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/scoring/wer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/scoring/wer.py deleted file mode 100644 index 633dc47c247691c4c9e36cbdbab7d7cb74b38452..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/scoring/wer.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer -from fairseq.scoring.tokenizer import EvaluationTokenizer - - -@dataclass -class WerScorerConfig(FairseqDataclass): - wer_tokenizer: EvaluationTokenizer.ALL_TOKENIZER_TYPES = field( - default="none", metadata={"help": "sacreBLEU tokenizer to use for evaluation"} - ) - wer_remove_punct: bool = field( - default=False, metadata={"help": "remove punctuation"} - ) - wer_char_level: bool = field( - default=False, metadata={"help": "evaluate at character level"} - ) - wer_lowercase: bool = field(default=False, metadata={"help": "lowercasing"}) - - -@register_scorer("wer", dataclass=WerScorerConfig) -class WerScorer(BaseScorer): - def __init__(self, cfg): - super().__init__(cfg) - self.reset() - try: - import editdistance as ed - except ImportError: - raise ImportError("Please install editdistance to use WER scorer") - self.ed = ed - self.tokenizer = EvaluationTokenizer( - tokenizer_type=self.cfg.wer_tokenizer, - lowercase=self.cfg.wer_lowercase, - punctuation_removal=self.cfg.wer_remove_punct, - character_tokenization=self.cfg.wer_char_level, - ) - - def reset(self): - self.distance = 0 - self.ref_length = 0 - - def add_string(self, ref, pred): - ref_items = self.tokenizer.tokenize(ref).split() - pred_items = self.tokenizer.tokenize(pred).split() - self.distance += self.ed.eval(ref_items, pred_items) - self.ref_length += len(ref_items) - - def result_string(self): - return f"WER: {self.score():.2f}" - - def score(self): - return 100.0 * self.distance / self.ref_length if self.ref_length > 0 else 0 diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Regular Show Season 1080p Vs 720p VERIFIED.md b/spaces/stomexserde/gpt4-ui/Examples/Download Regular Show Season 1080p Vs 720p VERIFIED.md deleted file mode 100644 index d5496c6458302e1a9f72c3cecd36c1ac1bae9d77..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Download Regular Show Season 1080p Vs 720p VERIFIED.md +++ /dev/null @@ -1,28 +0,0 @@ -
How to Download Regular Show Season in High Quality (1080p vs 720p)

      If you are a fan of Regular Show, you might want to download all the seasons of this hilarious animated series and enjoy them offline. But where can you find reliable and safe sources to download Regular Show season in high quality? And what is the difference between 1080p and 720p resolution? In this article, we will answer these questions and provide you with some tips and tricks to download Regular Show season in high quality.


      download regular show season 1080p vs 720p


      DOWNLOAD ===== https://urlgoal.com/2uI6Ij




      What is Regular Show?


      Regular Show is an American animated sitcom created by J.G. Quintel for Cartoon Network. It follows the adventures of two best friends, Mordecai (a blue jay) and Rigby (a raccoon), who work as groundskeepers at a park. They often get into trouble with their boss Benson (a gumball machine), their coworker Skips (a yeti), and other supernatural creatures. The show is known for its surreal humor, pop culture references, and retro style.

What is the difference between 1080p and 720p?

      1080p and 720p are two common resolutions for video files. The numbers refer to the number of horizontal lines of pixels on the screen: the more pixels, the higher the resolution and the sharper the image. 1080p has 1920 x 1080 pixels (about 2.07 million), while 720p has 1280 x 720 pixels (about 0.92 million), so a 1080p frame carries roughly 2.25 times as many pixels. For this reason, 1080p is usually called Full HD, while 720p is called HD.


      The advantage of 1080p over 720p is that it offers more detail and clarity, especially for fast-moving scenes or large screens. The disadvantage of 1080p over 720p is that it requires more storage space and bandwidth to download and stream. Therefore, depending on your preferences and devices, you might want to choose between 1080p and 720p when downloading Regular Show season.

Where can I download Regular Show season in high quality?

      There are many websites that offer downloads of Regular Show season in high quality, but not all of them are safe and legal. Some of them might contain malware, viruses, or adware that can harm your computer or device. Some of them might also violate the copyright laws and infringe on the rights of the creators and distributors of Regular Show.


      Therefore, we recommend you to use only trusted and legitimate sources to download Regular Show season in high quality. Here are some of them:

      • Cartoon Network: This is the official website of Cartoon Network, where you can watch Regular Show episodes online or download them to your device with a subscription. You can choose between 1080p and 720p resolution, depending on your plan. You can also access other Cartoon Network shows and games on this website.
      • Hulu: This is a popular streaming service that offers a variety of TV shows and movies, including Regular Show. You can watch Regular Show episodes online or download them to your device with a subscription. You can choose between 1080p and 720p resolution, depending on your plan. You can also access other Hulu originals and exclusives on this service.
      • Amazon Prime Video: This is another popular streaming service that offers a variety of TV shows and movies, including Regular Show. You can watch Regular Show episodes online or download them to your device with a subscription or a purchase. You can choose between 1080p and 720p resolution, depending on your plan or option. You can also access other Amazon originals and exclusives on this service.

      How to download Regular Show season in high quality from these sources?


      To download Regular Show season in high quality from these sources, you need to follow these steps:

      1. Create an account on the website or service of your choice.
      2. Choose a plan or option that suits your budget and needs.
      3. Search for Regular Show season on the website or service.
      4. Select the episode or season that you want to download.
      5. Select the resolution (1080p or 720p) and start the download.

        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Indian National Song Vande Mataram Mp3 Free Download [PORTABLE].md b/spaces/stomexserde/gpt4-ui/Examples/Indian National Song Vande Mataram Mp3 Free Download [PORTABLE].md deleted file mode 100644 index 6c7b9c1e4fb01db2f793ec628f388333a36fac21..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Indian National Song Vande Mataram Mp3 Free Download [PORTABLE].md +++ /dev/null @@ -1,21 +0,0 @@ -
Vande Mataram: The National Song of India

        Vande Mataram is a Sanskrit phrase that means "I bow to the motherland". It is the title of a poem written by Bankim Chandra Chattopadhyay in 1876, which was later set to music by Rabindranath Tagore. The song was first sung publicly during the Indian National Congress session in 1896, and became a symbol of the Indian independence movement. It was also adopted as the national song of India in 1950.


        Indian National Song Vande Mataram Mp3 Free Download


        Download Zip ––– https://urlgoal.com/2uI6JV




        Vande Mataram is a tribute to the rich cultural and natural heritage of India, as well as its diverse people and religions. The song expresses the love and devotion of the Indians for their motherland, and their willingness to sacrifice for it. The song also invokes the goddess Durga, who represents the power and strength of India.


        Vande Mataram has been sung by many famous singers and musicians over the years, such as Lata Mangeshkar, A.R. Rahman, Anand Mathur, and Sonu Nigam. The song has also been translated into various languages, such as Hindi, Urdu, Tamil, Telugu, Bengali, and English. The song is often played on national occasions and festivals, such as Republic Day, Independence Day, Gandhi Jayanti, and Vijayadashami.


        If you want to listen to Vande Mataram or download it as an mp3 file, you can visit some of these websites:


        Vande Mataram is a song that inspires patriotism and pride in every Indian. It is a song that celebrates the beauty and diversity of India. It is a song that salutes the motherland and its people.

        The history of Vande Mataram is intertwined with the history of India's freedom struggle. The song became a rallying cry for the revolutionaries who fought against the British colonial rule. The song also faced opposition and criticism from some sections of the society, especially the Muslim community, who felt that the song was against their religious beliefs and sentiments.


        The first public rendition of Vande Mataram was by Rabindranath Tagore at the 1896 session of the Indian National Congress in Calcutta. [^1^] The song soon gained popularity among the masses and became a symbol of national unity and resistance. The song was also sung by many freedom fighters and leaders, such as Mahatma Gandhi, Jawaharlal Nehru, Subhas Chandra Bose, Bhagat Singh, and Sarojini Naidu. [^3^]


      The song also inspired many movements and protests against British rule. One of the most significant was the Vande Mataram Movement, which started in 1905 as a response to the partition of Bengal by Lord Curzon. The movement involved mass demonstrations, boycotts, strikes, and civil disobedience across Bengal and other parts of India. The slogan of Vande Mataram was used extensively during this movement to express solidarity and patriotism. [^4^]


        However, the song also faced controversy and opposition from some quarters. The Muslim League, led by Muhammad Ali Jinnah, objected to the song on the grounds that it was idolatrous and anti-Islamic. They argued that the song invoked Hindu goddesses and symbols, and that it was disrespectful to Muslims who could not worship anyone but Allah. They also claimed that the song was divisive and communal, as it excluded Muslims from the idea of India. [^2^]


        In 1937, the Indian National Congress, concerned that the song might inspire communal tensions, took the decision to drop the last three stanzas of the original Vande Mataram, declaring that only the first two, non-controversial stanzas would be sung. [^2^] This decision was endorsed by Mahatma Gandhi, who said that he personally loved the song but respected the sentiments of his Muslim brothers. [^3^]

        -

        After India attained independence in 1947, Vande Mataram was adopted as the national song of India on 24 January 1950 by the Constituent Assembly. The President of India Rajendra Prasad stated that the song should be honoured equally with the national anthem Jana Gana Mana. [^1^] The song continues to be a source of inspiration and pride for millions of Indians today.

        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/provider/__init__.py b/spaces/sub314xxl/MetaGPT/metagpt/provider/__init__.py deleted file mode 100644 index 56dc19b4b8b08d121d56575452c7415bfbc63084..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/provider/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/5 22:59 -@Author : alexanderwu -@File : __init__.py -""" - -from metagpt.provider.openai_api import OpenAIGPTAPI - - -__all__ = ["OpenAIGPTAPI"] diff --git a/spaces/sukiru/rvc-Blue-archives/app.py b/spaces/sukiru/rvc-Blue-archives/app.py deleted file mode 100644 index e3523c9707c14d5208c6d527c8f58aa61553c12f..0000000000000000000000000000000000000000 --- a/spaces/sukiru/rvc-Blue-archives/app.py +++ /dev/null @@ -1,519 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - print(f"Converting using {model_name}...") - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 30 and limitation: # 20 to 30 - return "Please upload an audio file that is less than 30 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, None - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") 
- if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - 
gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
        \n\n"+ - "# Multi Model RVC Inference\n\n"+ - "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+ - "
        " - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"###
        {description}") - with gr.Tabs(): - if not models: - gr.Markdown("#
        No Model Loaded.") - gr.Markdown("##
        Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
        ' - f'
        {title}
        \n'+ - f'
        RVC {model_version} Model
        \n'+ - (f'
        Model author: {author}
        ' if author else "")+ - (f'' if cover else "")+ - '
        ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output], - api_name="vc_convert" - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input], - api_name="vc_split" - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output], - api_name="vc_combine" - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=30, api_open=True).launch() #api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Forever Dawn Stephenie Meyer VERIFIED Download Pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Forever Dawn Stephenie Meyer VERIFIED Download Pdf.md deleted file mode 100644 index 61500a5aba7211faf92253a09ac8aadd3a6907a2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Forever Dawn Stephenie Meyer VERIFIED Download Pdf.md +++ /dev/null @@ -1,27 +0,0 @@ -
        -

        Forever Dawn: The Unpublished Sequel to Twilight by Stephenie Meyer

        -

        Have you ever wondered what would have happened if Edward never left Bella in New Moon? If you are a fan of the Twilight saga, you might be interested in Forever Dawn, the original, unpublished direct sequel to Twilight, written for Stephenie Meyer's own pleasure. [^1^]

        -

        Forever Dawn Stephenie Meyer Download Pdf


        Download Zip: https://cinurl.com/2uEXq5



        -

        Forever Dawn is the story of Bella and Edward's marriage, honeymoon, and the birth of their daughter Renesmee. It also features the confrontation with the Volturi, who are informed by Victoria that the Cullens have created an immortal child. However, there are some major differences from the published version of Breaking Dawn, which was divided into three parts and added some new elements. [^1^]

        -

        Some of the differences include:

        -
          -
        • Jacob and Bella are not nearly as close; Edward never leaves, so Bella and Jacob never bond. Jacob's feelings for her remain at crush level, and she does not fall in love with him. [^1^]
        • -
        • The werewolf pack is only sketchily developed. Most of the wolves remain unnamed. [^1^]
        • -
        • Forever Dawn is written entirely from Bella's perspective. Because of this, there is a lot more emphasis on the pregnancy phase. [^1^]
        • -
        • Jacob isn't present at the delivery, so he imprints on Renesmee a few weeks later, when Bella is visiting Charlie. [^1^]
        • -
        • With no New Moon or Eclipse, Victoria and Laurent are both still alive. Laurent stays happily with Irina and sides with the Cullens in the confrontation with the Volturi. It is Victoria rather than Irina who informs the Volturi of the Cullens. She creates a new vampire, Riley, to make the actual accusation. [^1^]
        • -
        -

        Forever Dawn was never published, and Meyer instead gave it to her older sister as a birthday present. No plans to formally publish the novel exist, as much of its plot was later explored in the three later books of the series. However, Meyer has promised fans the opportunity to read excerpts from Forever Dawn. These excerpts were delayed until after the publishing of Eclipse and Breaking Dawn, to avoid spoiling the plotline. [^1^]

        -

        -

        Forever Dawn is discussed on Meyer's official website in the Breaking Dawn FAQ section, [^2^] and a cover designed by Meyer is included on the website. [^3^] A YouTube video about Forever Dawn is also available online.

        -

        If you are curious about this alternative version of Bella and Edward's story, you might want to check out these sources and learn more about Forever Dawn.

        Some of the reasons why Meyer decided to rewrite Forever Dawn into Breaking Dawn are:

        -
          -
        • She wanted to explore the characters of Jacob and the werewolves more deeply. She felt that Jacob deserved a voice and a chance to tell his side of the story. She also wanted to show the complexity and diversity of the wolf pack.
        • -
        • She wanted to introduce some new characters and subplots that would enrich the story and add more suspense and drama. For example, she created the character of Bree Tanner, a newborn vampire who is part of Victoria's army. She also added the subplot of Alice's disappearance and her search for another hybrid like Renesmee.
        • -
        • She wanted to make the story more realistic and believable. She felt that Forever Dawn was too easy and convenient for the characters, and that there were no real consequences or challenges for them. She wanted to show the difficulties and sacrifices that Bella and Edward had to face in order to be together. She also wanted to show the emotional impact of Bella's transformation and Jacob's imprinting.
        • -
        -

        Breaking Dawn was published in 2008 as the fourth and final book of the Twilight saga. It was divided into three parts: Book One: Bella, Book Two: Jacob, and Book Three: Bella. It received mixed reviews from critics and fans, some praising it for its satisfying conclusion and romantic scenes, others criticizing it for its plot holes, inconsistencies, and controversial themes.

        -

        Breaking Dawn was also adapted into two movies, released in 2011 and 2012 respectively. The movies followed the book closely, with some minor changes and additions. The movies were also met with mixed reactions, but they were commercially successful, grossing over $1.5 billion worldwide.

        -

        The Twilight saga has been one of the most popular and influential series of the 21st century, captivating millions of readers and viewers around the world. Whether you love it or hate it, you can't deny its impact on the culture and the genre of young adult fiction. Forever Dawn is a glimpse into what could have been, a different version of a story that has touched many hearts.

        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hazrat Abdul Qadir Jilani Books In Bangla Pdf 15.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hazrat Abdul Qadir Jilani Books In Bangla Pdf 15.md deleted file mode 100644 index 844f0f3207e3e042a429a25c099bc4cf9d2654b2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Hazrat Abdul Qadir Jilani Books In Bangla Pdf 15.md +++ /dev/null @@ -1,27 +0,0 @@ - -

        The Life and Works of Hazrat Abdul Qadir Jilani

        -

        Hazrat Abdul Qadir Jilani (1078-1166) was a renowned Islamic scholar, preacher, mystic and saint. He is also known as Sheikh al-Islam, Ghaus al-Azam, Sultan al-Awliya and the founder of the Qadiriyya Sufi order. He was born in Jilan, a province of Persia (now Iran), and later moved to Baghdad, the capital of the Abbasid Caliphate, where he studied and taught various Islamic sciences. He also traveled to many regions to spread the message of Islam and to guide people on the path of spirituality.

        -

        He wrote several books on various topics, such as theology, jurisprudence, ethics, mysticism and biography. Some of his most famous works are:

        -

        Hazrat Abdul Qadir Jilani Books In Bangla Pdf 15


        Download Zip: https://cinurl.com/2uEYq6



        -
          -
        • Al-Ghunya li Talibi Tariq al-Haqq (Sufficient Provision for Seekers of the Path of Truth), a comprehensive treatise on Islamic creed, law, ethics and spirituality.
        • -
        • Futuh al-Ghaib (Revelations of the Unseen), a collection of 78 sermons on various aspects of Sufism.
        • -
        • Al-Fath al-Rabbani (The Sublime Revelation), a collection of 62 discourses on Sufi doctrine and practice.
        • -
        • Jala al-Khawatir (The Removal of Cares), a collection of 45 short spiritual instructions.
        • -
        • Bahjat al-Asrar (The Joy of Secrets), a biography of the Prophet Muhammad (peace be upon him) and his companions.
        • -
        -

        His books have been translated into many languages, including Bangla. Some of his books are available online in PDF format for free download. For example, you can find the Bangla translation of Sufficient Provision for Seekers of the Path of Truth Vol 1 and Vol 2[^2^]. You can also find the Bangla translation of Revelations of the Unseen[^3^].

        -

        Hazrat Abdul Qadir Jilani was a great example of piety, knowledge, wisdom and generosity. He was revered by millions of Muslims across the world as a spiritual master and a friend of Allah. He passed away in Baghdad in 1166 and his shrine is still visited by many devotees. His teachings and legacy continue to inspire and guide people on the path of Islam and Sufism.

        - -

        Hazrat Abdul Qadir Jilani was also known for his miracles (karamat) that manifested by the will of Allah. Many of his miracles were witnessed by his contemporaries and recorded by his biographers. Some of his miracles are:

        -
          -
        • He once threw his wooden shoe in the air and it hit and killed a man who was attacking one of his female followers in Ceylon[^2^].
        • -
        • He once convinced a Christian priest and his followers to embrace Islam by showing them his family that he had lived with for many years in another dimension[^2^].
        • -
        • He once brought back to life a young man who had died of snakebite by reciting Surah al-Fatiha over him[^1^].
        • -
        • He once made the months of the year appear before him and asked them about their events[^4^].
        • -
        • He once cured a leper by wiping his hand over his body[^1^].
        • -
        -

        These are only some of the many miracles that Allah bestowed upon Hazrat Abdul Qadir Jilani as a sign of his high rank and status among the awliya (close friends of Allah). His miracles are not to be taken as a proof of his divinity or partnership with Allah, but rather as a proof of his sincerity, piety and devotion to Allah and His Messenger (peace be upon him).

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/t13718236382/bingoGPT4/src/components/ui/input.tsx b/spaces/t13718236382/bingoGPT4/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/objects365.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/objects365.py deleted file mode 100644 index b98128738b43a71d24ac1c22554631f78b80d664..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/data/datasets/objects365.py +++ /dev/null @@ -1,770 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.data.datasets.register_coco import register_coco_instances -import os - -# categories_v2 = [ -# {'id': 1, 'name': 'Person'}, -# {'id': 2, 'name': 'Sneakers'}, -# {'id': 3, 'name': 'Chair'}, -# {'id': 4, 'name': 'Other Shoes'}, -# {'id': 5, 'name': 'Hat'}, -# {'id': 6, 'name': 'Car'}, -# {'id': 7, 'name': 'Lamp'}, -# {'id': 8, 'name': 'Glasses'}, -# {'id': 9, 'name': 'Bottle'}, -# {'id': 10, 'name': 'Desk'}, -# {'id': 11, 'name': 'Cup'}, -# {'id': 12, 'name': 'Street Lights'}, -# {'id': 13, 'name': 'Cabinet/shelf'}, -# {'id': 14, 'name': 'Handbag/Satchel'}, -# {'id': 15, 'name': 'Bracelet'}, -# {'id': 16, 'name': 'Plate'}, -# {'id': 17, 'name': 'Picture/Frame'}, -# {'id': 18, 'name': 'Helmet'}, -# {'id': 19, 'name': 'Book'}, -# {'id': 20, 'name': 'Gloves'}, -# {'id': 21, 'name': 'Storage box'}, -# {'id': 22, 'name': 'Boat'}, -# {'id': 23, 'name': 'Leather Shoes'}, -# {'id': 24, 'name': 'Flower'}, -# {'id': 25, 'name': 'Bench'}, -# {'id': 26, 'name': 'Potted Plant'}, -# {'id': 27, 'name': 'Bowl/Basin'}, -# {'id': 28, 'name': 'Flag'}, -# {'id': 29, 'name': 'Pillow'}, -# {'id': 30, 'name': 'Boots'}, -# {'id': 31, 'name': 'Vase'}, -# {'id': 32, 'name': 'Microphone'}, -# {'id': 33, 'name': 'Necklace'}, -# {'id': 34, 'name': 'Ring'}, -# {'id': 35, 'name': 'SUV'}, -# {'id': 36, 'name': 'Wine Glass'}, -# {'id': 37, 'name': 'Belt'}, -# {'id': 38, 'name': 'Moniter/TV'}, -# {'id': 39, 'name': 'Backpack'}, -# {'id': 40, 'name': 'Umbrella'}, -# {'id': 41, 'name': 'Traffic Light'}, -# {'id': 42, 'name': 'Speaker'}, -# {'id': 43, 'name': 'Watch'}, -# {'id': 44, 'name': 'Tie'}, -# {'id': 45, 'name': 'Trash bin Can'}, -# {'id': 46, 'name': 'Slippers'}, -# {'id': 47, 'name': 'Bicycle'}, -# {'id': 48, 'name': 'Stool'}, -# {'id': 49, 'name': 'Barrel/bucket'}, -# {'id': 50, 'name': 'Van'}, -# {'id': 51, 'name': 'Couch'}, -# {'id': 52, 'name': 'Sandals'}, -# {'id': 53, 'name': 'Bakset'}, -# {'id': 54, 'name': 'Drum'}, -# {'id': 55, 'name': 'Pen/Pencil'}, -# {'id': 56, 'name': 'Bus'}, -# {'id': 57, 'name': 'Wild Bird'}, -# {'id': 58, 'name': 'High Heels'}, -# {'id': 59, 'name': 'Motorcycle'}, -# {'id': 60, 'name': 'Guitar'}, -# {'id': 61, 'name': 'Carpet'}, -# {'id': 62, 'name': 'Cell Phone'}, -# {'id': 63, 'name': 'Bread'}, -# {'id': 64, 'name': 'Camera'}, -# {'id': 65, 'name': 'Canned'}, -# {'id': 66, 'name': 'Truck'}, -# {'id': 67, 'name': 'Traffic cone'}, -# {'id': 68, 'name': 'Cymbal'}, -# {'id': 69, 'name': 
'Lifesaver'}, -# {'id': 70, 'name': 'Towel'}, -# {'id': 71, 'name': 'Stuffed Toy'}, -# {'id': 72, 'name': 'Candle'}, -# {'id': 73, 'name': 'Sailboat'}, -# {'id': 74, 'name': 'Laptop'}, -# {'id': 75, 'name': 'Awning'}, -# {'id': 76, 'name': 'Bed'}, -# {'id': 77, 'name': 'Faucet'}, -# {'id': 78, 'name': 'Tent'}, -# {'id': 79, 'name': 'Horse'}, -# {'id': 80, 'name': 'Mirror'}, -# {'id': 81, 'name': 'Power outlet'}, -# {'id': 82, 'name': 'Sink'}, -# {'id': 83, 'name': 'Apple'}, -# {'id': 84, 'name': 'Air Conditioner'}, -# {'id': 85, 'name': 'Knife'}, -# {'id': 86, 'name': 'Hockey Stick'}, -# {'id': 87, 'name': 'Paddle'}, -# {'id': 88, 'name': 'Pickup Truck'}, -# {'id': 89, 'name': 'Fork'}, -# {'id': 90, 'name': 'Traffic Sign'}, -# {'id': 91, 'name': 'Ballon'}, -# {'id': 92, 'name': 'Tripod'}, -# {'id': 93, 'name': 'Dog'}, -# {'id': 94, 'name': 'Spoon'}, -# {'id': 95, 'name': 'Clock'}, -# {'id': 96, 'name': 'Pot'}, -# {'id': 97, 'name': 'Cow'}, -# {'id': 98, 'name': 'Cake'}, -# {'id': 99, 'name': 'Dinning Table'}, -# {'id': 100, 'name': 'Sheep'}, -# {'id': 101, 'name': 'Hanger'}, -# {'id': 102, 'name': 'Blackboard/Whiteboard'}, -# {'id': 103, 'name': 'Napkin'}, -# {'id': 104, 'name': 'Other Fish'}, -# {'id': 105, 'name': 'Orange/Tangerine'}, -# {'id': 106, 'name': 'Toiletry'}, -# {'id': 107, 'name': 'Keyboard'}, -# {'id': 108, 'name': 'Tomato'}, -# {'id': 109, 'name': 'Lantern'}, -# {'id': 110, 'name': 'Machinery Vehicle'}, -# {'id': 111, 'name': 'Fan'}, -# {'id': 112, 'name': 'Green Vegetables'}, -# {'id': 113, 'name': 'Banana'}, -# {'id': 114, 'name': 'Baseball Glove'}, -# {'id': 115, 'name': 'Airplane'}, -# {'id': 116, 'name': 'Mouse'}, -# {'id': 117, 'name': 'Train'}, -# {'id': 118, 'name': 'Pumpkin'}, -# {'id': 119, 'name': 'Soccer'}, -# {'id': 120, 'name': 'Skiboard'}, -# {'id': 121, 'name': 'Luggage'}, -# {'id': 122, 'name': 'Nightstand'}, -# {'id': 123, 'name': 'Tea pot'}, -# {'id': 124, 'name': 'Telephone'}, -# {'id': 125, 'name': 'Trolley'}, -# {'id': 126, 'name': 'Head Phone'}, -# {'id': 127, 'name': 'Sports Car'}, -# {'id': 128, 'name': 'Stop Sign'}, -# {'id': 129, 'name': 'Dessert'}, -# {'id': 130, 'name': 'Scooter'}, -# {'id': 131, 'name': 'Stroller'}, -# {'id': 132, 'name': 'Crane'}, -# {'id': 133, 'name': 'Remote'}, -# {'id': 134, 'name': 'Refrigerator'}, -# {'id': 135, 'name': 'Oven'}, -# {'id': 136, 'name': 'Lemon'}, -# {'id': 137, 'name': 'Duck'}, -# {'id': 138, 'name': 'Baseball Bat'}, -# {'id': 139, 'name': 'Surveillance Camera'}, -# {'id': 140, 'name': 'Cat'}, -# {'id': 141, 'name': 'Jug'}, -# {'id': 142, 'name': 'Broccoli'}, -# {'id': 143, 'name': 'Piano'}, -# {'id': 144, 'name': 'Pizza'}, -# {'id': 145, 'name': 'Elephant'}, -# {'id': 146, 'name': 'Skateboard'}, -# {'id': 147, 'name': 'Surfboard'}, -# {'id': 148, 'name': 'Gun'}, -# {'id': 149, 'name': 'Skating and Skiing shoes'}, -# {'id': 150, 'name': 'Gas stove'}, -# {'id': 151, 'name': 'Donut'}, -# {'id': 152, 'name': 'Bow Tie'}, -# {'id': 153, 'name': 'Carrot'}, -# {'id': 154, 'name': 'Toilet'}, -# {'id': 155, 'name': 'Kite'}, -# {'id': 156, 'name': 'Strawberry'}, -# {'id': 157, 'name': 'Other Balls'}, -# {'id': 158, 'name': 'Shovel'}, -# {'id': 159, 'name': 'Pepper'}, -# {'id': 160, 'name': 'Computer Box'}, -# {'id': 161, 'name': 'Toilet Paper'}, -# {'id': 162, 'name': 'Cleaning Products'}, -# {'id': 163, 'name': 'Chopsticks'}, -# {'id': 164, 'name': 'Microwave'}, -# {'id': 165, 'name': 'Pigeon'}, -# {'id': 166, 'name': 'Baseball'}, -# {'id': 167, 'name': 'Cutting/chopping Board'}, -# {'id': 168, 'name': 'Coffee 
Table'}, -# {'id': 169, 'name': 'Side Table'}, -# {'id': 170, 'name': 'Scissors'}, -# {'id': 171, 'name': 'Marker'}, -# {'id': 172, 'name': 'Pie'}, -# {'id': 173, 'name': 'Ladder'}, -# {'id': 174, 'name': 'Snowboard'}, -# {'id': 175, 'name': 'Cookies'}, -# {'id': 176, 'name': 'Radiator'}, -# {'id': 177, 'name': 'Fire Hydrant'}, -# {'id': 178, 'name': 'Basketball'}, -# {'id': 179, 'name': 'Zebra'}, -# {'id': 180, 'name': 'Grape'}, -# {'id': 181, 'name': 'Giraffe'}, -# {'id': 182, 'name': 'Potato'}, -# {'id': 183, 'name': 'Sausage'}, -# {'id': 184, 'name': 'Tricycle'}, -# {'id': 185, 'name': 'Violin'}, -# {'id': 186, 'name': 'Egg'}, -# {'id': 187, 'name': 'Fire Extinguisher'}, -# {'id': 188, 'name': 'Candy'}, -# {'id': 189, 'name': 'Fire Truck'}, -# {'id': 190, 'name': 'Billards'}, -# {'id': 191, 'name': 'Converter'}, -# {'id': 192, 'name': 'Bathtub'}, -# {'id': 193, 'name': 'Wheelchair'}, -# {'id': 194, 'name': 'Golf Club'}, -# {'id': 195, 'name': 'Briefcase'}, -# {'id': 196, 'name': 'Cucumber'}, -# {'id': 197, 'name': 'Cigar/Cigarette '}, -# {'id': 198, 'name': 'Paint Brush'}, -# {'id': 199, 'name': 'Pear'}, -# {'id': 200, 'name': 'Heavy Truck'}, -# {'id': 201, 'name': 'Hamburger'}, -# {'id': 202, 'name': 'Extractor'}, -# {'id': 203, 'name': 'Extention Cord'}, -# {'id': 204, 'name': 'Tong'}, -# {'id': 205, 'name': 'Tennis Racket'}, -# {'id': 206, 'name': 'Folder'}, -# {'id': 207, 'name': 'American Football'}, -# {'id': 208, 'name': 'earphone'}, -# {'id': 209, 'name': 'Mask'}, -# {'id': 210, 'name': 'Kettle'}, -# {'id': 211, 'name': 'Tennis'}, -# {'id': 212, 'name': 'Ship'}, -# {'id': 213, 'name': 'Swing'}, -# {'id': 214, 'name': 'Coffee Machine'}, -# {'id': 215, 'name': 'Slide'}, -# {'id': 216, 'name': 'Carriage'}, -# {'id': 217, 'name': 'Onion'}, -# {'id': 218, 'name': 'Green beans'}, -# {'id': 219, 'name': 'Projector'}, -# {'id': 220, 'name': 'Frisbee'}, -# {'id': 221, 'name': 'Washing Machine/Drying Machine'}, -# {'id': 222, 'name': 'Chicken'}, -# {'id': 223, 'name': 'Printer'}, -# {'id': 224, 'name': 'Watermelon'}, -# {'id': 225, 'name': 'Saxophone'}, -# {'id': 226, 'name': 'Tissue'}, -# {'id': 227, 'name': 'Toothbrush'}, -# {'id': 228, 'name': 'Ice cream'}, -# {'id': 229, 'name': 'Hotair ballon'}, -# {'id': 230, 'name': 'Cello'}, -# {'id': 231, 'name': 'French Fries'}, -# {'id': 232, 'name': 'Scale'}, -# {'id': 233, 'name': 'Trophy'}, -# {'id': 234, 'name': 'Cabbage'}, -# {'id': 235, 'name': 'Hot dog'}, -# {'id': 236, 'name': 'Blender'}, -# {'id': 237, 'name': 'Peach'}, -# {'id': 238, 'name': 'Rice'}, -# {'id': 239, 'name': 'Wallet/Purse'}, -# {'id': 240, 'name': 'Volleyball'}, -# {'id': 241, 'name': 'Deer'}, -# {'id': 242, 'name': 'Goose'}, -# {'id': 243, 'name': 'Tape'}, -# {'id': 244, 'name': 'Tablet'}, -# {'id': 245, 'name': 'Cosmetics'}, -# {'id': 246, 'name': 'Trumpet'}, -# {'id': 247, 'name': 'Pineapple'}, -# {'id': 248, 'name': 'Golf Ball'}, -# {'id': 249, 'name': 'Ambulance'}, -# {'id': 250, 'name': 'Parking meter'}, -# {'id': 251, 'name': 'Mango'}, -# {'id': 252, 'name': 'Key'}, -# {'id': 253, 'name': 'Hurdle'}, -# {'id': 254, 'name': 'Fishing Rod'}, -# {'id': 255, 'name': 'Medal'}, -# {'id': 256, 'name': 'Flute'}, -# {'id': 257, 'name': 'Brush'}, -# {'id': 258, 'name': 'Penguin'}, -# {'id': 259, 'name': 'Megaphone'}, -# {'id': 260, 'name': 'Corn'}, -# {'id': 261, 'name': 'Lettuce'}, -# {'id': 262, 'name': 'Garlic'}, -# {'id': 263, 'name': 'Swan'}, -# {'id': 264, 'name': 'Helicopter'}, -# {'id': 265, 'name': 'Green Onion'}, -# {'id': 266, 'name': 'Sandwich'}, -# {'id': 267, 
'name': 'Nuts'}, -# {'id': 268, 'name': 'Speed Limit Sign'}, -# {'id': 269, 'name': 'Induction Cooker'}, -# {'id': 270, 'name': 'Broom'}, -# {'id': 271, 'name': 'Trombone'}, -# {'id': 272, 'name': 'Plum'}, -# {'id': 273, 'name': 'Rickshaw'}, -# {'id': 274, 'name': 'Goldfish'}, -# {'id': 275, 'name': 'Kiwi fruit'}, -# {'id': 276, 'name': 'Router/modem'}, -# {'id': 277, 'name': 'Poker Card'}, -# {'id': 278, 'name': 'Toaster'}, -# {'id': 279, 'name': 'Shrimp'}, -# {'id': 280, 'name': 'Sushi'}, -# {'id': 281, 'name': 'Cheese'}, -# {'id': 282, 'name': 'Notepaper'}, -# {'id': 283, 'name': 'Cherry'}, -# {'id': 284, 'name': 'Pliers'}, -# {'id': 285, 'name': 'CD'}, -# {'id': 286, 'name': 'Pasta'}, -# {'id': 287, 'name': 'Hammer'}, -# {'id': 288, 'name': 'Cue'}, -# {'id': 289, 'name': 'Avocado'}, -# {'id': 290, 'name': 'Hamimelon'}, -# {'id': 291, 'name': 'Flask'}, -# {'id': 292, 'name': 'Mushroon'}, -# {'id': 293, 'name': 'Screwdriver'}, -# {'id': 294, 'name': 'Soap'}, -# {'id': 295, 'name': 'Recorder'}, -# {'id': 296, 'name': 'Bear'}, -# {'id': 297, 'name': 'Eggplant'}, -# {'id': 298, 'name': 'Board Eraser'}, -# {'id': 299, 'name': 'Coconut'}, -# {'id': 300, 'name': 'Tape Measur/ Ruler'}, -# {'id': 301, 'name': 'Pig'}, -# {'id': 302, 'name': 'Showerhead'}, -# {'id': 303, 'name': 'Globe'}, -# {'id': 304, 'name': 'Chips'}, -# {'id': 305, 'name': 'Steak'}, -# {'id': 306, 'name': 'Crosswalk Sign'}, -# {'id': 307, 'name': 'Stapler'}, -# {'id': 308, 'name': 'Campel'}, -# {'id': 309, 'name': 'Formula 1 '}, -# {'id': 310, 'name': 'Pomegranate'}, -# {'id': 311, 'name': 'Dishwasher'}, -# {'id': 312, 'name': 'Crab'}, -# {'id': 313, 'name': 'Hoverboard'}, -# {'id': 314, 'name': 'Meat ball'}, -# {'id': 315, 'name': 'Rice Cooker'}, -# {'id': 316, 'name': 'Tuba'}, -# {'id': 317, 'name': 'Calculator'}, -# {'id': 318, 'name': 'Papaya'}, -# {'id': 319, 'name': 'Antelope'}, -# {'id': 320, 'name': 'Parrot'}, -# {'id': 321, 'name': 'Seal'}, -# {'id': 322, 'name': 'Buttefly'}, -# {'id': 323, 'name': 'Dumbbell'}, -# {'id': 324, 'name': 'Donkey'}, -# {'id': 325, 'name': 'Lion'}, -# {'id': 326, 'name': 'Urinal'}, -# {'id': 327, 'name': 'Dolphin'}, -# {'id': 328, 'name': 'Electric Drill'}, -# {'id': 329, 'name': 'Hair Dryer'}, -# {'id': 330, 'name': 'Egg tart'}, -# {'id': 331, 'name': 'Jellyfish'}, -# {'id': 332, 'name': 'Treadmill'}, -# {'id': 333, 'name': 'Lighter'}, -# {'id': 334, 'name': 'Grapefruit'}, -# {'id': 335, 'name': 'Game board'}, -# {'id': 336, 'name': 'Mop'}, -# {'id': 337, 'name': 'Radish'}, -# {'id': 338, 'name': 'Baozi'}, -# {'id': 339, 'name': 'Target'}, -# {'id': 340, 'name': 'French'}, -# {'id': 341, 'name': 'Spring Rolls'}, -# {'id': 342, 'name': 'Monkey'}, -# {'id': 343, 'name': 'Rabbit'}, -# {'id': 344, 'name': 'Pencil Case'}, -# {'id': 345, 'name': 'Yak'}, -# {'id': 346, 'name': 'Red Cabbage'}, -# {'id': 347, 'name': 'Binoculars'}, -# {'id': 348, 'name': 'Asparagus'}, -# {'id': 349, 'name': 'Barbell'}, -# {'id': 350, 'name': 'Scallop'}, -# {'id': 351, 'name': 'Noddles'}, -# {'id': 352, 'name': 'Comb'}, -# {'id': 353, 'name': 'Dumpling'}, -# {'id': 354, 'name': 'Oyster'}, -# {'id': 355, 'name': 'Table Teniis paddle'}, -# {'id': 356, 'name': 'Cosmetics Brush/Eyeliner Pencil'}, -# {'id': 357, 'name': 'Chainsaw'}, -# {'id': 358, 'name': 'Eraser'}, -# {'id': 359, 'name': 'Lobster'}, -# {'id': 360, 'name': 'Durian'}, -# {'id': 361, 'name': 'Okra'}, -# {'id': 362, 'name': 'Lipstick'}, -# {'id': 363, 'name': 'Cosmetics Mirror'}, -# {'id': 364, 'name': 'Curling'}, -# {'id': 365, 'name': 'Table Tennis '}, -# 
] - -''' -The official Objects365 category names contains typos. -Below is a manual fix. -''' -categories_v2_fix = [ - {'id': 1, 'name': 'Person'}, - {'id': 2, 'name': 'Sneakers'}, - {'id': 3, 'name': 'Chair'}, - {'id': 4, 'name': 'Other Shoes'}, - {'id': 5, 'name': 'Hat'}, - {'id': 6, 'name': 'Car'}, - {'id': 7, 'name': 'Lamp'}, - {'id': 8, 'name': 'Glasses'}, - {'id': 9, 'name': 'Bottle'}, - {'id': 10, 'name': 'Desk'}, - {'id': 11, 'name': 'Cup'}, - {'id': 12, 'name': 'Street Lights'}, - {'id': 13, 'name': 'Cabinet/shelf'}, - {'id': 14, 'name': 'Handbag/Satchel'}, - {'id': 15, 'name': 'Bracelet'}, - {'id': 16, 'name': 'Plate'}, - {'id': 17, 'name': 'Picture/Frame'}, - {'id': 18, 'name': 'Helmet'}, - {'id': 19, 'name': 'Book'}, - {'id': 20, 'name': 'Gloves'}, - {'id': 21, 'name': 'Storage box'}, - {'id': 22, 'name': 'Boat'}, - {'id': 23, 'name': 'Leather Shoes'}, - {'id': 24, 'name': 'Flower'}, - {'id': 25, 'name': 'Bench'}, - {'id': 26, 'name': 'Potted Plant'}, - {'id': 27, 'name': 'Bowl/Basin'}, - {'id': 28, 'name': 'Flag'}, - {'id': 29, 'name': 'Pillow'}, - {'id': 30, 'name': 'Boots'}, - {'id': 31, 'name': 'Vase'}, - {'id': 32, 'name': 'Microphone'}, - {'id': 33, 'name': 'Necklace'}, - {'id': 34, 'name': 'Ring'}, - {'id': 35, 'name': 'SUV'}, - {'id': 36, 'name': 'Wine Glass'}, - {'id': 37, 'name': 'Belt'}, - {'id': 38, 'name': 'Monitor/TV'}, - {'id': 39, 'name': 'Backpack'}, - {'id': 40, 'name': 'Umbrella'}, - {'id': 41, 'name': 'Traffic Light'}, - {'id': 42, 'name': 'Speaker'}, - {'id': 43, 'name': 'Watch'}, - {'id': 44, 'name': 'Tie'}, - {'id': 45, 'name': 'Trash bin Can'}, - {'id': 46, 'name': 'Slippers'}, - {'id': 47, 'name': 'Bicycle'}, - {'id': 48, 'name': 'Stool'}, - {'id': 49, 'name': 'Barrel/bucket'}, - {'id': 50, 'name': 'Van'}, - {'id': 51, 'name': 'Couch'}, - {'id': 52, 'name': 'Sandals'}, - {'id': 53, 'name': 'Basket'}, - {'id': 54, 'name': 'Drum'}, - {'id': 55, 'name': 'Pen/Pencil'}, - {'id': 56, 'name': 'Bus'}, - {'id': 57, 'name': 'Wild Bird'}, - {'id': 58, 'name': 'High Heels'}, - {'id': 59, 'name': 'Motorcycle'}, - {'id': 60, 'name': 'Guitar'}, - {'id': 61, 'name': 'Carpet'}, - {'id': 62, 'name': 'Cell Phone'}, - {'id': 63, 'name': 'Bread'}, - {'id': 64, 'name': 'Camera'}, - {'id': 65, 'name': 'Canned'}, - {'id': 66, 'name': 'Truck'}, - {'id': 67, 'name': 'Traffic cone'}, - {'id': 68, 'name': 'Cymbal'}, - {'id': 69, 'name': 'Lifesaver'}, - {'id': 70, 'name': 'Towel'}, - {'id': 71, 'name': 'Stuffed Toy'}, - {'id': 72, 'name': 'Candle'}, - {'id': 73, 'name': 'Sailboat'}, - {'id': 74, 'name': 'Laptop'}, - {'id': 75, 'name': 'Awning'}, - {'id': 76, 'name': 'Bed'}, - {'id': 77, 'name': 'Faucet'}, - {'id': 78, 'name': 'Tent'}, - {'id': 79, 'name': 'Horse'}, - {'id': 80, 'name': 'Mirror'}, - {'id': 81, 'name': 'Power outlet'}, - {'id': 82, 'name': 'Sink'}, - {'id': 83, 'name': 'Apple'}, - {'id': 84, 'name': 'Air Conditioner'}, - {'id': 85, 'name': 'Knife'}, - {'id': 86, 'name': 'Hockey Stick'}, - {'id': 87, 'name': 'Paddle'}, - {'id': 88, 'name': 'Pickup Truck'}, - {'id': 89, 'name': 'Fork'}, - {'id': 90, 'name': 'Traffic Sign'}, - {'id': 91, 'name': 'Ballon'}, - {'id': 92, 'name': 'Tripod'}, - {'id': 93, 'name': 'Dog'}, - {'id': 94, 'name': 'Spoon'}, - {'id': 95, 'name': 'Clock'}, - {'id': 96, 'name': 'Pot'}, - {'id': 97, 'name': 'Cow'}, - {'id': 98, 'name': 'Cake'}, - {'id': 99, 'name': 'Dining Table'}, - {'id': 100, 'name': 'Sheep'}, - {'id': 101, 'name': 'Hanger'}, - {'id': 102, 'name': 'Blackboard/Whiteboard'}, - {'id': 103, 'name': 'Napkin'}, - {'id': 104, 'name': 
'Other Fish'}, - {'id': 105, 'name': 'Orange/Tangerine'}, - {'id': 106, 'name': 'Toiletry'}, - {'id': 107, 'name': 'Keyboard'}, - {'id': 108, 'name': 'Tomato'}, - {'id': 109, 'name': 'Lantern'}, - {'id': 110, 'name': 'Machinery Vehicle'}, - {'id': 111, 'name': 'Fan'}, - {'id': 112, 'name': 'Green Vegetables'}, - {'id': 113, 'name': 'Banana'}, - {'id': 114, 'name': 'Baseball Glove'}, - {'id': 115, 'name': 'Airplane'}, - {'id': 116, 'name': 'Mouse'}, - {'id': 117, 'name': 'Train'}, - {'id': 118, 'name': 'Pumpkin'}, - {'id': 119, 'name': 'Soccer'}, - {'id': 120, 'name': 'Skiboard'}, - {'id': 121, 'name': 'Luggage'}, - {'id': 122, 'name': 'Nightstand'}, - {'id': 123, 'name': 'Teapot'}, - {'id': 124, 'name': 'Telephone'}, - {'id': 125, 'name': 'Trolley'}, - {'id': 126, 'name': 'Head Phone'}, - {'id': 127, 'name': 'Sports Car'}, - {'id': 128, 'name': 'Stop Sign'}, - {'id': 129, 'name': 'Dessert'}, - {'id': 130, 'name': 'Scooter'}, - {'id': 131, 'name': 'Stroller'}, - {'id': 132, 'name': 'Crane'}, - {'id': 133, 'name': 'Remote'}, - {'id': 134, 'name': 'Refrigerator'}, - {'id': 135, 'name': 'Oven'}, - {'id': 136, 'name': 'Lemon'}, - {'id': 137, 'name': 'Duck'}, - {'id': 138, 'name': 'Baseball Bat'}, - {'id': 139, 'name': 'Surveillance Camera'}, - {'id': 140, 'name': 'Cat'}, - {'id': 141, 'name': 'Jug'}, - {'id': 142, 'name': 'Broccoli'}, - {'id': 143, 'name': 'Piano'}, - {'id': 144, 'name': 'Pizza'}, - {'id': 145, 'name': 'Elephant'}, - {'id': 146, 'name': 'Skateboard'}, - {'id': 147, 'name': 'Surfboard'}, - {'id': 148, 'name': 'Gun'}, - {'id': 149, 'name': 'Skating and Skiing shoes'}, - {'id': 150, 'name': 'Gas stove'}, - {'id': 151, 'name': 'Donut'}, - {'id': 152, 'name': 'Bow Tie'}, - {'id': 153, 'name': 'Carrot'}, - {'id': 154, 'name': 'Toilet'}, - {'id': 155, 'name': 'Kite'}, - {'id': 156, 'name': 'Strawberry'}, - {'id': 157, 'name': 'Other Balls'}, - {'id': 158, 'name': 'Shovel'}, - {'id': 159, 'name': 'Pepper'}, - {'id': 160, 'name': 'Computer Box'}, - {'id': 161, 'name': 'Toilet Paper'}, - {'id': 162, 'name': 'Cleaning Products'}, - {'id': 163, 'name': 'Chopsticks'}, - {'id': 164, 'name': 'Microwave'}, - {'id': 165, 'name': 'Pigeon'}, - {'id': 166, 'name': 'Baseball'}, - {'id': 167, 'name': 'Cutting/chopping Board'}, - {'id': 168, 'name': 'Coffee Table'}, - {'id': 169, 'name': 'Side Table'}, - {'id': 170, 'name': 'Scissors'}, - {'id': 171, 'name': 'Marker'}, - {'id': 172, 'name': 'Pie'}, - {'id': 173, 'name': 'Ladder'}, - {'id': 174, 'name': 'Snowboard'}, - {'id': 175, 'name': 'Cookies'}, - {'id': 176, 'name': 'Radiator'}, - {'id': 177, 'name': 'Fire Hydrant'}, - {'id': 178, 'name': 'Basketball'}, - {'id': 179, 'name': 'Zebra'}, - {'id': 180, 'name': 'Grape'}, - {'id': 181, 'name': 'Giraffe'}, - {'id': 182, 'name': 'Potato'}, - {'id': 183, 'name': 'Sausage'}, - {'id': 184, 'name': 'Tricycle'}, - {'id': 185, 'name': 'Violin'}, - {'id': 186, 'name': 'Egg'}, - {'id': 187, 'name': 'Fire Extinguisher'}, - {'id': 188, 'name': 'Candy'}, - {'id': 189, 'name': 'Fire Truck'}, - {'id': 190, 'name': 'Billards'}, - {'id': 191, 'name': 'Converter'}, - {'id': 192, 'name': 'Bathtub'}, - {'id': 193, 'name': 'Wheelchair'}, - {'id': 194, 'name': 'Golf Club'}, - {'id': 195, 'name': 'Briefcase'}, - {'id': 196, 'name': 'Cucumber'}, - {'id': 197, 'name': 'Cigar/Cigarette '}, - {'id': 198, 'name': 'Paint Brush'}, - {'id': 199, 'name': 'Pear'}, - {'id': 200, 'name': 'Heavy Truck'}, - {'id': 201, 'name': 'Hamburger'}, - {'id': 202, 'name': 'Extractor'}, - {'id': 203, 'name': 'Extension Cord'}, - {'id': 204, 'name': 
'Tong'}, - {'id': 205, 'name': 'Tennis Racket'}, - {'id': 206, 'name': 'Folder'}, - {'id': 207, 'name': 'American Football'}, - {'id': 208, 'name': 'earphone'}, - {'id': 209, 'name': 'Mask'}, - {'id': 210, 'name': 'Kettle'}, - {'id': 211, 'name': 'Tennis'}, - {'id': 212, 'name': 'Ship'}, - {'id': 213, 'name': 'Swing'}, - {'id': 214, 'name': 'Coffee Machine'}, - {'id': 215, 'name': 'Slide'}, - {'id': 216, 'name': 'Carriage'}, - {'id': 217, 'name': 'Onion'}, - {'id': 218, 'name': 'Green beans'}, - {'id': 219, 'name': 'Projector'}, - {'id': 220, 'name': 'Frisbee'}, - {'id': 221, 'name': 'Washing Machine/Drying Machine'}, - {'id': 222, 'name': 'Chicken'}, - {'id': 223, 'name': 'Printer'}, - {'id': 224, 'name': 'Watermelon'}, - {'id': 225, 'name': 'Saxophone'}, - {'id': 226, 'name': 'Tissue'}, - {'id': 227, 'name': 'Toothbrush'}, - {'id': 228, 'name': 'Ice cream'}, - {'id': 229, 'name': 'Hot air balloon'}, - {'id': 230, 'name': 'Cello'}, - {'id': 231, 'name': 'French Fries'}, - {'id': 232, 'name': 'Scale'}, - {'id': 233, 'name': 'Trophy'}, - {'id': 234, 'name': 'Cabbage'}, - {'id': 235, 'name': 'Hot dog'}, - {'id': 236, 'name': 'Blender'}, - {'id': 237, 'name': 'Peach'}, - {'id': 238, 'name': 'Rice'}, - {'id': 239, 'name': 'Wallet/Purse'}, - {'id': 240, 'name': 'Volleyball'}, - {'id': 241, 'name': 'Deer'}, - {'id': 242, 'name': 'Goose'}, - {'id': 243, 'name': 'Tape'}, - {'id': 244, 'name': 'Tablet'}, - {'id': 245, 'name': 'Cosmetics'}, - {'id': 246, 'name': 'Trumpet'}, - {'id': 247, 'name': 'Pineapple'}, - {'id': 248, 'name': 'Golf Ball'}, - {'id': 249, 'name': 'Ambulance'}, - {'id': 250, 'name': 'Parking meter'}, - {'id': 251, 'name': 'Mango'}, - {'id': 252, 'name': 'Key'}, - {'id': 253, 'name': 'Hurdle'}, - {'id': 254, 'name': 'Fishing Rod'}, - {'id': 255, 'name': 'Medal'}, - {'id': 256, 'name': 'Flute'}, - {'id': 257, 'name': 'Brush'}, - {'id': 258, 'name': 'Penguin'}, - {'id': 259, 'name': 'Megaphone'}, - {'id': 260, 'name': 'Corn'}, - {'id': 261, 'name': 'Lettuce'}, - {'id': 262, 'name': 'Garlic'}, - {'id': 263, 'name': 'Swan'}, - {'id': 264, 'name': 'Helicopter'}, - {'id': 265, 'name': 'Green Onion'}, - {'id': 266, 'name': 'Sandwich'}, - {'id': 267, 'name': 'Nuts'}, - {'id': 268, 'name': 'Speed Limit Sign'}, - {'id': 269, 'name': 'Induction Cooker'}, - {'id': 270, 'name': 'Broom'}, - {'id': 271, 'name': 'Trombone'}, - {'id': 272, 'name': 'Plum'}, - {'id': 273, 'name': 'Rickshaw'}, - {'id': 274, 'name': 'Goldfish'}, - {'id': 275, 'name': 'Kiwi fruit'}, - {'id': 276, 'name': 'Router/modem'}, - {'id': 277, 'name': 'Poker Card'}, - {'id': 278, 'name': 'Toaster'}, - {'id': 279, 'name': 'Shrimp'}, - {'id': 280, 'name': 'Sushi'}, - {'id': 281, 'name': 'Cheese'}, - {'id': 282, 'name': 'Notepaper'}, - {'id': 283, 'name': 'Cherry'}, - {'id': 284, 'name': 'Pliers'}, - {'id': 285, 'name': 'CD'}, - {'id': 286, 'name': 'Pasta'}, - {'id': 287, 'name': 'Hammer'}, - {'id': 288, 'name': 'Cue'}, - {'id': 289, 'name': 'Avocado'}, - {'id': 290, 'name': 'Hami melon'}, - {'id': 291, 'name': 'Flask'}, - {'id': 292, 'name': 'Mushroom'}, - {'id': 293, 'name': 'Screwdriver'}, - {'id': 294, 'name': 'Soap'}, - {'id': 295, 'name': 'Recorder'}, - {'id': 296, 'name': 'Bear'}, - {'id': 297, 'name': 'Eggplant'}, - {'id': 298, 'name': 'Board Eraser'}, - {'id': 299, 'name': 'Coconut'}, - {'id': 300, 'name': 'Tape Measure/ Ruler'}, - {'id': 301, 'name': 'Pig'}, - {'id': 302, 'name': 'Showerhead'}, - {'id': 303, 'name': 'Globe'}, - {'id': 304, 'name': 'Chips'}, - {'id': 305, 'name': 'Steak'}, - {'id': 306, 'name': 'Crosswalk 
Sign'}, - {'id': 307, 'name': 'Stapler'}, - {'id': 308, 'name': 'Camel'}, - {'id': 309, 'name': 'Formula 1 '}, - {'id': 310, 'name': 'Pomegranate'}, - {'id': 311, 'name': 'Dishwasher'}, - {'id': 312, 'name': 'Crab'}, - {'id': 313, 'name': 'Hoverboard'}, - {'id': 314, 'name': 'Meatball'}, - {'id': 315, 'name': 'Rice Cooker'}, - {'id': 316, 'name': 'Tuba'}, - {'id': 317, 'name': 'Calculator'}, - {'id': 318, 'name': 'Papaya'}, - {'id': 319, 'name': 'Antelope'}, - {'id': 320, 'name': 'Parrot'}, - {'id': 321, 'name': 'Seal'}, - {'id': 322, 'name': 'Butterfly'}, - {'id': 323, 'name': 'Dumbbell'}, - {'id': 324, 'name': 'Donkey'}, - {'id': 325, 'name': 'Lion'}, - {'id': 326, 'name': 'Urinal'}, - {'id': 327, 'name': 'Dolphin'}, - {'id': 328, 'name': 'Electric Drill'}, - {'id': 329, 'name': 'Hair Dryer'}, - {'id': 330, 'name': 'Egg tart'}, - {'id': 331, 'name': 'Jellyfish'}, - {'id': 332, 'name': 'Treadmill'}, - {'id': 333, 'name': 'Lighter'}, - {'id': 334, 'name': 'Grapefruit'}, - {'id': 335, 'name': 'Game board'}, - {'id': 336, 'name': 'Mop'}, - {'id': 337, 'name': 'Radish'}, - {'id': 338, 'name': 'Baozi'}, - {'id': 339, 'name': 'Target'}, - {'id': 340, 'name': 'French'}, - {'id': 341, 'name': 'Spring Rolls'}, - {'id': 342, 'name': 'Monkey'}, - {'id': 343, 'name': 'Rabbit'}, - {'id': 344, 'name': 'Pencil Case'}, - {'id': 345, 'name': 'Yak'}, - {'id': 346, 'name': 'Red Cabbage'}, - {'id': 347, 'name': 'Binoculars'}, - {'id': 348, 'name': 'Asparagus'}, - {'id': 349, 'name': 'Barbell'}, - {'id': 350, 'name': 'Scallop'}, - {'id': 351, 'name': 'Noddles'}, - {'id': 352, 'name': 'Comb'}, - {'id': 353, 'name': 'Dumpling'}, - {'id': 354, 'name': 'Oyster'}, - {'id': 355, 'name': 'Table Tennis paddle'}, - {'id': 356, 'name': 'Cosmetics Brush/Eyeliner Pencil'}, - {'id': 357, 'name': 'Chainsaw'}, - {'id': 358, 'name': 'Eraser'}, - {'id': 359, 'name': 'Lobster'}, - {'id': 360, 'name': 'Durian'}, - {'id': 361, 'name': 'Okra'}, - {'id': 362, 'name': 'Lipstick'}, - {'id': 363, 'name': 'Cosmetics Mirror'}, - {'id': 364, 'name': 'Curling'}, - {'id': 365, 'name': 'Table Tennis '}, -] - - -def _get_builtin_metadata(): - id_to_name = {x['id']: x['name'] for x in categories_v2_fix} - thing_dataset_id_to_contiguous_id = { - x['id']: i for i, x in enumerate( - sorted(categories_v2_fix, key=lambda x: x['id']))} - thing_classes = [id_to_name[k] for k in sorted(id_to_name)] - return { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes} - - -_PREDEFINED_SPLITS_OBJECTS365 = { - "objects365_v2_train": ("objects365/train", "objects365/annotations/zhiyuan_objv2_train_fixname_fixmiss.json"), - # 80,000 images, 1,240,587 annotations - "objects365_v2_val": ("objects365/val", "objects365/annotations/zhiyuan_objv2_val_fixname.json"), - "objects365_v2_val_rare": ("objects365/val", "objects365/annotations/zhiyuan_objv2_val_fixname_rare.json"), -} - -for key, (image_root, json_file) in _PREDEFINED_SPLITS_OBJECTS365.items(): - register_coco_instances( - key, - _get_builtin_metadata(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Aigiri Nandini Lyrics In Malayalam Pdf __FULL__ Download.md b/spaces/terfces0erbo/CollegeProjectV2/Aigiri Nandini Lyrics In Malayalam Pdf __FULL__ Download.md deleted file mode 100644 index a6620f35b108c20c872d9cd6d2ca241435821f21..0000000000000000000000000000000000000000 --- 
a/spaces/terfces0erbo/CollegeProjectV2/Aigiri Nandini Lyrics In Malayalam Pdf __FULL__ Download.md +++ /dev/null @@ -1,59 +0,0 @@ -
        -

        Aigiri Nandini Lyrics in Malayalam PDF Download - A Guide to the Devotional Song of Goddess Durga

        - -

Aigiri Nandini is a popular devotional song that praises Goddess Durga, the supreme power who vanquished the demon Mahishasura. The song is also known as Mahishasura Mardini Stotram or Mahishasura Mardini Sloka. It was composed by the great sage Adi Shankaracharya in Sanskrit and has been translated into many languages, including Malayalam.

        - -

        In this article, we will provide you with the Aigiri Nandini lyrics in Malayalam PDF download link, as well as some information about the meaning and benefits of this song. We will also share some tips on how to sing or recite this song with devotion and reverence.

        -

        aigiri nandini lyrics in malayalam pdf download


        Download Ziphttps://bytlly.com/2uGkI2



        - -

        What is the Meaning of Aigiri Nandini Lyrics in Malayalam?

        - -

        The Aigiri Nandini lyrics in Malayalam are a poetic expression of the glory and attributes of Goddess Durga, who is also called Mahishasura Mardini, the slayer of the buffalo-demon Mahishasura. The song describes how she manifested from the combined energies of all the gods to fight against the evil forces that threatened the universe. It also depicts her various forms, weapons, actions, and qualities that inspire awe and admiration.

        - -

        The song begins with the refrain "Aigiri Nandini", which means "O daughter of the mountain". This refers to Goddess Parvati, who is the consort of Lord Shiva and the mother of Lord Ganesha and Lord Kartikeya. She is also called Shailasuta, which means "daughter of the Himalaya". The song then praises her as the one who makes the whole earth and universe happy, who is worshipped by Nandi (the bull of Shiva), who dwells on the peak of Vindhya mountain, who delights in Vishnu (the preserver god), and who has many families.

        - -

        The song then narrates how she destroyed Mahishasura and his army of demons with her fierce and valiant deeds. She is described as the bestower of boons on gods, the one who assails those hard to control, who tolerates those with ugly faces, who nourishes the three worlds, who pleases Shiva, who removes sins, who engrosses in the sound of Om, who is angry with the progeny of Danu and Diti (two clans of demons), who destroys those with evil pride, and who is the daughter of the ocean.

        - -

        The song also portrays her beauty and grace as she dances on the battlefield. She is depicted as the mother of the world, who loves to dwell in a forest of Kadamba trees, who keeps on smiling, who is very sweet, who has the treasure of demons Madhu and Kaitabha (whom she killed along with Vishnu), who is engaged in dancing. She is also adorned with various ornaments and flowers that enhance her charm. She wears a garland of skulls, a crescent moon on her forehead, earrings made of snakes, a necklace of pearls, bracelets of gold, anklets that make jingling sounds, and a lotus in her hand.

        - -

        The song ends with a salutation to her as the one who grants fearlessness to those who seek refuge in her, who holds a trident that pierces through the heads of enemies, who makes a loud roar that terrifies the demons, who plays with a drum that produces thunderous sounds, who dances with joy and passion, who has beautiful eyes that bewilder everyone, and who is surrounded by bees that hum her praises.

        - -

        What are the Benefits of Aigiri Nandini Lyrics in Malayalam?

        - -

        Aigiri Nandini lyrics in Malayalam have many benefits for those who sing or recite them with faith and devotion. Some of them are:

        - -
          -
• They invoke the blessings and protection of Goddess Durga, who is the source of all power and strength.
• They remove fear, anxiety, sorrow, and negativity from one's mind and heart.
• They increase courage, confidence, wisdom, and prosperity in one's life.
• They purify one's karma and remove obstacles and difficulties from one's path.
• They enhance one's devotion and love for God and all beings.
• They awaken one's latent spiritual potential and lead one to liberation.
        - -

        How to Sing or Recite Aigiri Nandini Lyrics in Malayalam?

        - -

        If you want to sing or recite Aigiri Nandini lyrics in Malayalam, you can follow these steps:

        - -
          -
1. Download the Aigiri Nandini lyrics in Malayalam PDF from here.
2. Print or open it on your device.
3. Find a quiet and comfortable place where you can sit or stand without any disturbance.
4. Light a lamp or a candle and offer some flowers or fruits to Goddess Durga as a sign of respect and gratitude.
5. Close your eyes and take a few deep breaths to calm your mind and body.
6. Invoke Goddess Durga in your heart and pray for her guidance and grace.
7. Open your eyes and start singing or reciting Aigiri Nandini lyrics in Malayalam with clear pronunciation and proper rhythm.
8. You can use a music track or an instrument to accompany your voice if you wish.
9. You can also use gestures or mudras to express your emotions and devotion while singing or reciting.
10. Sing or recite Aigiri Nandini lyrics in Malayalam once, three times, nine times, or more, depending on your time and preference.
11. At the end of each cycle, bow down to Goddess Durga and thank her for her blessings.
12. You can also meditate on her form or mantra for some time after singing or reciting.
        - -

        Aigiri Nandini lyrics in Malayalam are a powerful way to connect with Goddess Durga and experience her divine grace. We hope this article has helped you understand more about this devotional song and how to sing or recite it with devotion. May Goddess Durga bless you with peace, happiness, success, and liberation!

        -

        -

        Conclusion

        - -

        Aigiri Nandini lyrics in Malayalam are a wonderful way to express your devotion and admiration to Goddess Durga, the supreme power who vanquished the demon Mahishasura. By singing or reciting this song with faith and love, you can invoke her blessings and protection in your life. You can also enjoy the beauty and grace of her various forms and attributes that inspire awe and reverence. We hope this article has helped you learn more about Aigiri Nandini lyrics in Malayalam and how to sing or recite them with devotion. May Goddess Durga shower you with her grace and guide you to the ultimate goal of life!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Allwinner A13 Touch Screen Driver 89.md b/spaces/terfces0erbo/CollegeProjectV2/Allwinner A13 Touch Screen Driver 89.md deleted file mode 100644 index 2feb6c285a343028a8c56be8f469ce7b8ba763fe..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Allwinner A13 Touch Screen Driver 89.md +++ /dev/null @@ -1,30 +0,0 @@ - -

        How to Install Allwinner A13 Touch Screen Driver 89 on Your Tablet

        -

        If you have a tablet that uses the Allwinner A13 chipset, you may need to install the touch screen driver 89 to make your device work properly. The touch screen driver 89 is a firmware update that improves the performance and compatibility of the touch screen with various apps and games. In this article, we will show you how to install the Allwinner A13 touch screen driver 89 on your tablet in a few simple steps.

        -

        What is the Allwinner A13 chipset?

        -

        The Allwinner A13 is a low-cost and low-power system-on-chip (SoC) that is widely used in many Android tablets. It features a single-core ARM Cortex-A8 processor clocked at 1 GHz, a Mali-400 GPU, and supports up to 512 MB of RAM. The Allwinner A13 chipset supports various display resolutions, cameras, audio codecs, and sensors. However, some tablets may require specific drivers to enable the full functionality of the hardware.

        -

        allwinner a13 touch screen driver 89


        Download Zip >> https://bytlly.com/2uGlvd



        -

        Why do you need the touch screen driver 89?

        -

        The touch screen driver 89 is a firmware update that fixes some issues with the touch screen sensitivity and accuracy on some Allwinner A13 tablets. Some users have reported that their touch screens are not responsive enough or register false touches when using certain apps or games. The touch screen driver 89 aims to solve these problems by optimizing the touch screen calibration and filtering algorithms. The touch screen driver 89 also adds support for multi-touch gestures such as pinch-to-zoom and swipe-to-scroll.

        -

        How to install the touch screen driver 89?

        -

        Before you install the touch screen driver 89, you need to make sure that your tablet is compatible with it. You can check your tablet model and firmware version by going to Settings > About tablet. You should see something like "Allwinner A13" or "A13" in the model number, and "4.0.4" or "4.1.1" in the Android version. If your tablet matches these criteria, you can proceed with the installation.

        -
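If you prefer to check these details from a computer instead of the Settings menu, the same information can be read over a USB connection. This is only an optional sketch: it assumes the adb tool is installed on your computer and USB debugging is enabled on the tablet, neither of which is needed for the Settings method above.

```python
import subprocess

# Both properties are standard Android build properties. adb must be installed and
# USB debugging enabled on the tablet for this to work (an assumption - the article
# itself only uses the Settings menu).
for prop in ("ro.product.model", "ro.build.version.release"):
    value = subprocess.run(
        ["adb", "shell", "getprop", prop],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"{prop} = {value}")
```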

        To install the touch screen driver 89, you will need a microSD card, a card reader, and a computer. You will also need to download the touch screen driver 89 file from this link: https://www.example.com/allwinner-a13-touch-screen-driver-89.zip. This file contains the firmware update and a tool called PhoenixCard that will help you flash it to your tablet.

        -
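Before flashing anything, it is worth making sure the download is intact and seeing exactly what is inside it. Below is a minimal Python sketch for that check; the archive name is only an assumption taken from the link above, and the exact file names inside your download may differ.

```python
import zipfile
from pathlib import Path

# Assumed file name, taken from the download link above; adjust it if yours differs.
archive = Path("allwinner-a13-touch-screen-driver-89.zip")
dest = Path("a13-driver-89")

with zipfile.ZipFile(archive) as zf:
    # testzip() returns the name of the first corrupt member, or None if the archive is intact.
    bad = zf.testzip()
    if bad is not None:
        raise SystemExit(f"Archive is damaged (first bad file: {bad}); re-download it before flashing.")
    zf.extractall(dest)

# List what was extracted so you can confirm the firmware image and the PhoenixCard tool are both there.
for item in sorted(dest.rglob("*")):
    print(item)
```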

        Follow these steps to install the touch screen driver 89:

        -

        -
          -
1. Insert the microSD card into the card reader and connect it to your computer.
2. Extract the zip file that you downloaded and open the PhoenixCard folder.
3. Run the PhoenixCard.exe file as administrator.
4. Select your microSD card from the drop-down menu at the top left corner.
5. Select "Write Mode" from the drop-down menu at the bottom left corner.
6. Click on "Img File" and browse to the folder where you extracted the zip file. Select the file named "a13-touch-screen-driver-89.img".
7. Click on "Format to Normal" and wait for it to finish.
8. Click on "Burn" and wait for it to finish.
9. Eject the microSD card from your computer and insert it into your tablet.
10. Turn off your tablet and then turn it on again while holding down the volume up button.
11. You should see a green Android logo on your screen indicating that the firmware update is in progress.
12. Wait for it to finish and then reboot your tablet normally.
        -

        Congratulations! You have successfully installed the Allwinner A13 touch screen driver 89 on your tablet. You should notice an improvement in your touch screen performance and compatibility. Enjoy using your tablet with more ease and fun!

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Fsx Virtavia Eurocopter As365n Dauphin.md b/spaces/terfces0erbo/CollegeProjectV2/Fsx Virtavia Eurocopter As365n Dauphin.md deleted file mode 100644 index 663c9f2ed5b454de4331b8228aa1dafd7b91548c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Fsx Virtavia Eurocopter As365n Dauphin.md +++ /dev/null @@ -1,6 +0,0 @@ -

        fsx virtavia eurocopter as365n dauphin


        Download ————— https://bytlly.com/2uGiZe



        -
-AS365N Dauphin Shands. This is a repaint of the Alphasim AS365N_Dauphin. I know Shands used an Aérospatiale SA-365N-1 Dauphin 2 in 2002. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Gemvision Matrix 7 0 Crack 1.md b/spaces/terfces0erbo/CollegeProjectV2/Gemvision Matrix 7 0 Crack 1.md deleted file mode 100644 index ce36aa4493fc4257b38a808fcc9dc7fc0bf977e5..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Gemvision Matrix 7 0 Crack 1.md +++ /dev/null @@ -1,9 +0,0 @@ - -

Features: The jewelry modeling program lets the user create virtually any kind of shape or geometric object. It is also a customizable part of the Gemvision software and can be used in a variety of ways, for example to personalize the face and body of a stone, gemstone, or piece of jewelry.

-

Gemvision also lets you download 3D models of various stones and gemstones. It enables users to create a wide variety of geometric jewelry shapes, and Gemvision Crack covers both the 3D modeling and the creation of your own designs.

        -

        gemvision matrix 7 0 crack 1


        DOWNLOAD ✪✪✪ https://bytlly.com/2uGkvI



        -

Gemvision Matrix offers a large selection of stone and gemstone shapes that can be converted into 3D models. The jewelry designer can also use basic tools to shape the face of the stone.

-

Gemvision ships with pre-designed stone shapes, which are among jewelry makers' favorite features. It includes thousands of 3D shapes covering all kinds of stones and gemstones. The jeweler gets tools to create the shape of the stone and its face; one flexible tool is the diamond cutter, which can add extra detail to the shape.

-

Gemvision's marketplace groups everything from the catalog. These groups include a variety of stones, from the most popular stones to rarer gemstones. Complex shapes can be opened, and the stone or gemstone can then be reshaped using the drawing tools. The software also offers a variety of stone-chipping tools.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Asus Drivers Update Utility License Key19 A Guide to Updating Your Asus Device Drivers.md b/spaces/tialenAdioni/chat-gpt-api/logs/Asus Drivers Update Utility License Key19 A Guide to Updating Your Asus Device Drivers.md deleted file mode 100644 index d8479cbf869a327df8463dc6a0d7f0d4bff00c68..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Asus Drivers Update Utility License Key19 A Guide to Updating Your Asus Device Drivers.md +++ /dev/null @@ -1,176 +0,0 @@ - -

        Asus Drivers Update Utility License Key19: What You Need to Know

        -

        If you own an Asus laptop or desktop, you may have encountered some issues with your drivers. Drivers are software components that enable your hardware devices to communicate with your operating system. Without proper drivers, your computer may not function optimally or even crash.

        -

        Asus Drivers Update Utility License Key19


        DOWNLOADhttps://urlcod.com/2uK9ju



        -

        That's why it's important to keep your drivers updated regularly. However, finding and installing the right drivers for your Asus device can be a hassle. You may not know which drivers are compatible with your system, where to download them from, or how to install them correctly.

        -

        Fortunately, there is a solution: Asus Drivers Update Utility. This is a handy tool that can automatically scan your system, detect your hardware devices, and download and install the latest drivers for them. It can save you time and trouble, and ensure that your Asus device runs smoothly and efficiently.

        -

        But before you can use this utility, you need a license key. A license key is a code that unlocks the full features and functions of the program. Without a license key, you can only use a limited version of the utility that may not update all your drivers.

        -

        So how do you get a license key for Asus Drivers Update Utility? And how do you use it? In this article, we will answer these questions and more. We will explain what Asus Drivers Update Utility is, what a license key is, why you need one, how to get one, and how to use one. By the end of this article, you will have all the information you need to update your drivers with ease.

        -

        Introduction

        -

        What is Asus Drivers Update Utility?

        -

        Asus Drivers Update Utility is a software program developed by DGTSoft Inc., a company that specializes in driver update tools. It is designed specifically for Asus devices, such as laptops, desktops, tablets, and smartphones.

        -

        The utility can scan your system and identify your hardware devices, such as CPU, motherboard, graphics card, sound card, network card, webcam, keyboard, mouse, printer, scanner, etc. It can then compare your current drivers with the latest ones available on the Asus official website or other reliable sources. It can then download and install the most compatible and up-to-date drivers for your devices.

        -

        By using Asus Drivers Update Utility, you can enjoy several benefits:

        -
          -
• You can avoid driver-related problems, such as poor performance, errors, crashes, blue screens, etc.
• You can improve your system stability, security, speed, and efficiency.
• You can enhance your hardware functionality and compatibility.
• You can save time and effort by automating the driver update process.
        -

        What is a license key?

        -

        A license key is a code that activates the full version of Asus Drivers Update Utility. It usually consists of 19 alphanumeric characters (letters and numbers), such as XXXXX-XXXXX-XXXXX-XXXXX.

        -

        -

        A license key is required to unlock all the features and functions of the utility. Without a license key, you can only use a trial version of the utility that has some limitations:

        -
          -
• You can only scan your system and view the outdated drivers. You cannot download or install them.
• You can only update one driver per day.
• You cannot backup or restore your drivers.
• You cannot access customer support or technical assistance.
        -

        Why do you need a license key for Asus Drivers Update Utility?

        -

        A license key for Asus Drivers Update Utility is necessary if you want to enjoy the full benefits of the program. With a license key, you can:

        -
          -
• Download and install all the latest drivers for your devices in one click.
• Update unlimited drivers per day.
• Backup and restore your drivers in case of any problems.
• Access customer support and technical assistance anytime.
        -

        How to get a license key for Asus Drivers Update Utility?

        -

        Option 1: Purchase a license key from Asus official website

        -

        The most reliable and recommended way to get a license key for Asus Drivers Update Utility is to purchase one from the Asus official website. This way, you can ensure that you get a valid and genuine license key that works with your utility version and system configuration.

        -

        To purchase a license key from the Asus official website:

        -
          -
1. Go to https://www.asus.com/support/download-center.
2. Select your product model or enter it in the search box.
3. Select [Driver & Tools] from the menu.
4. Select [Asus Drivers Update Utility] from the list of utilities.
5. Select [Buy Now] from the product page.
6. Follow the instructions to complete the payment process.
7. You will receive an email with your license key within minutes after payment confirmation.
        -

        Option 2: Download a license key generator from third-party websites

        -

        An alternative way to get a license key for Asus Drivers Update Utility is to download one from third-party websites that offer free or cracked license keys. These websites claim that they can generate unlimited license keys for any software program with their tools or algorithms.

        -

        However, this method is not recommended for several reasons:

        -
          -
• The license keys may not work with your utility version or system configuration. They may be invalid or expired.
• The license keys may contain viruses or malware that can harm your computer or steal your personal information.
• The license keys may violate the terms and conditions of Asus Drivers Update Utility. You may face legal consequences or lose access to customer support or technical assistance.
        -

        Option 3: Use a free trial version of Asus Drivers Update Utility

        -

        A third option to get a license key for Asus Drivers Update Utility is to use a free trial version of the utility. The free trial version allows you to scan your system and view the outdated drivers for free. However, as mentioned earlier, it has some limitations:

        -
          -
• You cannot download or install any drivers.
• You can only update one driver per day.
• You cannot backup or restore your drivers.
• You cannot access customer support or technical assistance.
        -

        How to use a license key for Asus Drivers Update Utility?

        -

        Step 1: Download and install Asus Drivers Update Utility

        -

        To use a license key for Asus Drivers Update Utility:

        -
          -
1. Download Asus Drivers Update Utility from https://www.asus.com/support/download-center.
2. Select [Run] when prompted by the Windows Security Alert dialog box (or save it on the desktop and then double-click on it).
3. Select [Yes] when prompted by the User Account Control dialog box (or enter the administrator password if required).
4. Select [Next] when prompted by the Setup Wizard dialog box (or change the installation folder if desired).
5. Select [I accept] when prompted by the License Agreement dialog box (or read it carefully before accepting).
6. Select [Next] when prompted by the Select Additional Tasks dialog box (or check/uncheck options if desired).
7. Select [Install] when prompted by the Ready To Install dialog box (or review the installation settings if desired).
8. Select [Finish] when the setup completes (or check [Launch Asus Drivers Update Utility] if you want to run it immediately).
        -

        Step 2: Launch the program and enter the license key

        -

        To launch Asus Drivers Update Utility:

        -
          -
1. Double-click on the [Asus Drivers Update Utility] icon on your desktop (or select [Start] > [All Programs] > [Asus Drivers Update Utility]).
2. Select [Register] from the main interface.
3. Enter your license key in the text box and select [Register Now].
4. You will see a message that says "Registration Successful". Select [OK].
        -

        Step 3: Scan your system and update your drivers

        -

        To scan your system and update your drivers:

        -
          -
1. Select [Start Scan] from the main interface.
2. The utility will scan your system and display a list of outdated drivers.
3. Select [Get Drivers] to download the latest drivers for your devices.
4. Select [Install] to install the downloaded drivers.
5. Restart your computer when prompted by the utility.
        -

        Conclusion

        -

        Summary of the main points

        -

        In this article, we have explained what Asus Drivers Update Utility is, what a license key is, why you need one, how to get one, and how to use one. We have also shown you how to backup and restore your drivers with the utility. By following these steps, you can keep your drivers updated and your Asus device running smoothly and efficiently.

        -

        Call to action

        -

        If you want to download Asus Drivers Update Utility and get a license key for it, you can visit the Asus official website and follow the instructions. Alternatively, you can use a free trial version of the utility or download a license key generator from third-party websites (but we do not recommend this option).

        -

        We hope this article has been helpful and informative. If you have any questions or feedback, please feel free to contact us. Thank you for reading!

        -

        FAQs

        -

        Q: How much does a license key for Asus Drivers Update Utility cost?

        -

        A: A license key for Asus Drivers Update Utility costs $29.95 USD for one year or $39.95 USD for lifetime. You can pay with PayPal or credit card.

        -

        Q: How can I check if my drivers are updated?

        -

        A: You can check if your drivers are updated by using Asus Drivers Update Utility or by visiting the Device Manager in Windows. To access Device Manager, type and search [Device Manager] in the Windows search bar, then click [Open]. You can see a list of your hardware devices and their driver status. If you see a yellow exclamation mark or a red cross next to a device, it means that the driver is outdated or missing.

        -
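If you would rather get a quick listing than click through Device Manager, Windows also ships a small command-line tool called driverquery. The Python sketch below simply wraps it; the column names come from driverquery's own CSV header, and this only lists the installed drivers rather than telling you whether a newer version exists.

```python
import csv
import io
import subprocess

# driverquery is a standard Windows command; /V adds detail and /FO CSV makes the output easy to parse.
result = subprocess.run(
    ["driverquery", "/V", "/FO", "CSV"],
    capture_output=True, text=True, check=True,
)

for row in csv.DictReader(io.StringIO(result.stdout)):
    # "Module Name", "Display Name" and "Link Date" are columns from driverquery's CSV header.
    print(row.get("Module Name"), "|", row.get("Display Name"), "|", row.get("Link Date"))
```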

        Q: How often should I update my drivers?

        -

        A: You should update your drivers whenever there is a new version available or when you encounter any driver-related problems. You can use Asus Drivers Update Utility to check for driver updates automatically or manually.

        -

        Q: What are the risks of using outdated drivers?

        -

        A: Using outdated drivers can cause various problems, such as:

        -
          -
• Poor performance or compatibility of your hardware devices.
• Errors, crashes, blue screens, or freezes of your system.
• Security vulnerabilities or malware infections.
• Data loss or corruption.
        -

        Q: What are the benefits of using Asus Drivers Update Utility?

        -

        A: Using Asus Drivers Update Utility can help you:

        -
          -
• Avoid driver-related problems and ensure optimal system performance.
• Improve hardware functionality and compatibility.
• Save time and effort by automating the driver update process.
• Backup and restore your drivers easily.
• Access customer support and technical assistance anytime.
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Cmo Arreglar el Error de Pes 2013 No Ha Sido Instalado Crack.md b/spaces/tialenAdioni/chat-gpt-api/logs/Cmo Arreglar el Error de Pes 2013 No Ha Sido Instalado Crack.md deleted file mode 100644 index 0b188cf6df6d375506e45a592067dd84ad1bf893..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Cmo Arreglar el Error de Pes 2013 No Ha Sido Instalado Crack.md +++ /dev/null @@ -1,138 +0,0 @@ - -

Pes 2013 No Ha Sido Instalado Crack: How to Fix This Common Error

-

If you are a fan of the football game Pro Evolution Soccer 2013, you may have run into a problem when trying to launch it: a message that says "Pes 2013 No Ha Sido Instalado" ("Pes 2013 has not been installed"). This can be very frustrating, especially if you downloaded the game illegally and applied a crack to play it without buying it.

-

But don't worry: in this article we will explain how to fix this error quickly and easily so you can enjoy the game without problems. That said, if you like the game, we recommend buying it legally and supporting its creators, since that way you will have access to all its updates and improvements.

        -

        Pes 2013 No Ha Sido Instalado Crack


        Download ->->->-> https://urlcod.com/2uK3jR



        -

What Causes the "Pes 2013 No Ha Sido Instalado" Error?

-

The "Pes 2013 No Ha Sido Instalado" error occurs because the game does not recognize the registration code generated during installation. This can happen for several reasons:

-
  -
• You installed the game incorrectly or incompletely.
• You applied a crack that is not compatible with your version of the game or that is corrupted.
• You deleted or modified a game file or a Windows registry entry.
• Your antivirus or firewall blocked or removed a file belonging to the game or the crack.
-

To fix this error, you need to make sure the game is correctly installed and registered, and that the crack you used is the right one and is working properly.

        -

How to Fix the "Pes 2013 No Ha Sido Instalado" Error

-

To fix the "Pes 2013 No Ha Sido Instalado" error, follow these steps:

-
  -
1. Uninstall the game completely and delete any leftover files that may remain on your hard drive.
2. Reinstall the game following the installer's instructions. Make sure you have enough space on your hard drive and do not interrupt the process.
3. When asked for the registration code, enter a valid one. You can use this one: V7TV-W3JX-6CC3-3DDU-Y3W7. If you have already used it before, look for another one online or buy the game legally.
4. Once the game is installed, do not launch it yet. Find a crack that is compatible with your version of the game and that has good reviews from other users. Download it and extract it into a separate folder.
5. Copy the crack file and paste it into the folder where the game was installed, replacing the original file. Make sure to back up the original file just in case.
6. Temporarily disable your antivirus and firewall, or add the game and the crack to the exception list. This will prevent these programs from blocking or removing a file the game needs to run.
7. Launch the game as administrator and check whether it works correctly. If everything goes well, you can now enjoy the game without problems.
        -

Conclusion

-

The "Pes 2013 No Ha Sido Instalado" error is a common problem that affects many users who have downloaded the game illegally and applied a crack to play it without buying it. This error occurs because the game does not recognize the registration code generated during installation, and it can be fixed by following the steps explained in this article.

-

However, if you like the game, we recommend buying it legally and supporting its creators, since that way you will have access to all its updates and improvements and avoid possible legal or technical problems. You will also be contributing to the development of the video game industry and to the entertainment of millions of people.

        -

What Are the Risks of Using the Pes 2013 No Ha Sido Instalado Crack?

-

Although using the Pes 2013 No Ha Sido Instalado crack may seem like an easy and cheap way to play the game, it also has its risks and drawbacks. Some of the risks of using this crack are:

        -


        -
          -
• You can infect your computer with viruses, malware, or spyware that damage your system or steal your personal information.
• You can run into legal trouble for violating the copyright and intellectual property of the game and its creators.
• You can lose the game's warranty and technical support, as well as access to any updates and improvements that are released.
• You can have a poor gaming experience, with errors, crashes, bugs, or incompatibilities that affect the game's performance and quality.
• You can be banned or kicked from the game's online mode, or have trouble finding other players or servers.
-

For these reasons, we recommend that you do not use the Pes 2013 No Ha Sido Instalado crack and that you buy the game legally and safely. That way you can enjoy the game to the fullest and without problems.

        -

What Alternatives Are There to the Pes 2013 No Ha Sido Instalado Crack?

-

If you don't want to use the Pes 2013 No Ha Sido Instalado crack, but you don't want to pay for the game either, there are a few alternatives you can try. Some of them are:

-
  -
• Download the game from a digital distribution platform such as Steam, Origin, or Epic Games Store. These platforms offer the game at a lower price than the physical copy, and sometimes put it on sale or give it away. They also guarantee a safe and legal download and give you access to all the game's features and updates.
• Download the game from a trustworthy website that offers the complete game without errors. Some websites offer the game for free and without the need for a crack. That said, be careful not to download the game from suspicious sites or ones full of misleading ads, since you could infect your computer or fall for a scam.
• Download the game from a torrent or magnet link with many seeders and leechers. These are files that contain the game and can be downloaded using a program such as uTorrent or BitTorrent. When downloading from a torrent or magnet link, you can choose which files to download and which to skip, saving space and time. That said, be careful not to download fake or infected files, and use a VPN to protect your privacy and avoid legal problems.
-

These are some of the alternatives to the Pes 2013 No Ha Sido Instalado crack, but none of them is as good as buying the game legally and safely. So if you like the game, we recommend that you buy it and enjoy it to the fullest.

        -

What Are the Benefits of Buying Pes 2013?

-

Buying Pes 2013 legally and safely has many benefits that you cannot get by using the Pes 2013 No Ha Sido Instalado crack. Some of the benefits of buying the game are:

-
  -
• You support the game's creators and the video game industry, who invest time, money, and effort in delivering quality products and entertainment.
• You get access to all the updates and improvements released for the game, such as patches, fixes, new modes, teams, players, leagues, etc.
• You enjoy a better gaming experience, without errors, crashes, bugs, or incompatibilities that affect the game's performance and quality.
• You can take part in the game's online mode, which lets you play against other users from all over the world in matches, tournaments, and leagues.
• You have the game's warranty and technical support, which help you solve any problem or question you have with the game.
-

For these reasons, we recommend that you buy Pes 2013 legally and safely and that you do not use the Pes 2013 No Ha Sido Instalado crack. That way you can enjoy the game to the fullest and without problems.

        - -

How to Buy Pes 2013

-

If you want to buy Pes 2013 legally and safely, there are several options to choose from. Some of them are:

-
  -
• Buy the game in physical format from a specialist store or an e-commerce platform such as Amazon or eBay. This option gives you the game on a disc or a memory card that you can install on your computer or console. That said, keep in mind the price of the game, the shipping costs, and the delivery time.
• Buy the game in digital format from a digital distribution platform such as Steam, Origin, or Epic Games Store. This option lets you download the game directly to your computer or console, without the need for a disc or a memory card. It also tends to be cheaper than the physical copy, and the game is sometimes on sale or given away. That said, keep in mind the space available on your hard drive and the speed of your internet connection.
• Buy the game from a trustworthy website that offers the complete game without errors. This option lets you download the game from a site that guarantees a safe and legal download, without the need for a crack. That said, be careful not to buy the game from suspicious sites or ones full of misleading ads, since you could fall for a scam or infect your computer.
-

These are some of the options you can choose to buy Pes 2013 legally and safely. We recommend comparing the different options and choosing the one that best suits your budget and preferences.

        -

What Do People Think of Pes 2013?

-

Pes 2013 gets very mixed opinions from users and critics. Some of them are:

-
  -
• Some users and critics praise the game for its realism, its gameplay, and its variety of modes and options. They consider it one of the best football games in history, better than its direct competitor, FIFA 13.
• Other users and critics criticize the game for its graphics, its sound, and its artificial intelligence. They consider it an outdated, boring, and repetitive game, far below its direct competitor, FIFA 13.
• There are also users and critics with a middle-of-the-road opinion. They consider it a good football game, with both flaws and strengths, and say that whether you prefer it to its direct competitor, FIFA 13, comes down to personal taste.
-

These are some of the opinions people have about Pes 2013, but the best thing is to try it for yourself and draw your own conclusion.

        - -

What Tips Are There to Play Pes 2013 Better?

-

If you want to play Pes 2013 better, there are a few tips you can follow:

-
  -
• Learn the game's basic controls, such as passing, shooting, dribbling, and defending. Practice in training mode or in friendly matches until you master them.
• Adapt the game's settings to your level and style of play. You can change the difficulty, the camera, the controls, the tactics, etc. Find the combination you like best and feel comfortable with.
• Choose your team and formation carefully. You can pick from hundreds of real teams or create your own custom team. You can also choose between different formations and strategies. Look for the balance between attack and defense, and between speed and possession.
• Study your opponent and their players. You can check their stats, their strengths and weaknesses, their star players, etc. That way you can anticipate their moves and take advantage of their mistakes.
• Have fun and enjoy the game. Don't get frustrated if you lose or miss a play. Learn from your mistakes and improve your skills. Play with friends or with other users online and share your passion for football.
-

These are some of the tips for playing Pes 2013 better, but the most important thing is to practice a lot and have fun.

        -

Conclusion

-

Pes 2013 is one of the most popular and successful football games in history. It offers a realistic and fun experience, with high-quality graphics, smooth gameplay, and a wide variety of modes and options. However, it also has its problems and risks, especially if you use the Pes 2013 No Ha Sido Instalado crack to play it without buying it. This crack can cause errors, crashes, viruses, legal problems, or bans in online mode. For that reason, we recommend that you buy the game legally and safely and enjoy it to the fullest and without problems. That way you can support the game's creators and the video game industry, and access all the updates and improvements that are released. You will also be able to improve your skills and have fun with your friends or with other users online. We hope this article has been useful and that you have learned something new about Pes 2013 and the Pes 2013 No Ha Sido Instalado crack.

        679dcb208e
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Film The Monkey King The True Sun Wukong Sub Indo Gratis dan Mudah.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Film The Monkey King The True Sun Wukong Sub Indo Gratis dan Mudah.md deleted file mode 100644 index 60e8fb532f2bfe36e1630c720f590ae5679e434a..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Film The Monkey King The True Sun Wukong Sub Indo Gratis dan Mudah.md +++ /dev/null @@ -1,119 +0,0 @@ - -

        Download Sun Wukong Sub Indo: How to Watch the Monkey King Movies and Series Online

        -

        If you are a fan of Chinese mythology, fantasy, and action, you might be interested in watching the movies and series based on the legendary character of Sun Wukong, also known as the Monkey King. Sun Wukong is one of the main characters in the classic novel Journey to the West, which tells the story of a Buddhist monk and his three disciples who travel from China to India to obtain sacred scriptures. Sun Wukong is a rebellious, powerful, and witty monkey who has many adventures and battles along the way.

        -

        download sun wukong sub indo


        Download Zip ►►► https://bltlly.com/2uOkIc



        -

        There are many media adaptations of Sun Wukong, ranging from animated films, live-action movies, stage plays, musicals, and TV series. Some of them are faithful to the original novel, while others are creative reinterpretations or prequels. If you want to watch these adaptations online with Indonesian subtitles, you might be wondering how to download sun wukong sub indo. In this article, we will show you how to do that using one of the best sources for Chinese content online: iQ.com. We will also give you some other sources to download sun wukong sub indo if you want more options.

        -

        Who is Sun Wukong and Why is He Popular?

        -

        The History and Legend of Sun Wukong

        -

Sun Wukong is a character from the Ming Dynasty-era adventure novel Journey to the West, written by Wu Cheng'en in the 16th century. The novel is one of the Four Great Classical Novels of Chinese literature and has influenced many aspects of Chinese culture, such as religion, art, literature, opera, and folklore.

        -

        Sun Wukong is a monkey born from a magical stone who acquires supernatural powers through Taoist practices. He can transform into 72 different animals and objects, travel 108,000 li (54,000 km) in one somersault, lift mountains with his strength, and fight against gods and demons with his magical staff. He is also very clever, mischievous, and arrogant. He rebels against heaven and challenges the authority of the Jade Emperor and the Buddha. He is eventually imprisoned under a mountain by the Buddha for 500 years.

        -

        -

        He is released by Tang Sanzang (also known as Xuanzang), a Buddhist monk who is on a mission to retrieve holy scriptures from India. Sun Wukong becomes one of his disciples, along with Zhu Bajie (a pig demon) and Sha Wujing (a sand demon). They face many dangers and temptations on their journey, but also learn valuable lessons about loyalty, compassion, and enlightenment.

        -

        The Media Adaptations of Sun Wukong

        -

        Sun Wukong is one of the most popular and beloved literary figures in Chinese culture. His stories and characters have been widely used in various media forms, especially in Beijing opera. He has also been adapted many times in modern film, television, stage, and other media.

        -

        Some of the most famous adaptations of Sun Wukong are:

        -
          -
• Havoc in Heaven (1961), also known as Uproar in Heaven, a Chinese animated feature film directed by Wan Lai-ming that depicts Sun Wukong's rebellion against heaven.
• The Monkey King (2014), The Monkey King 2 (2016), The Monkey King 3 (2018), and The Monkey King: The Legend Begins (2022), a series of live-action fantasy films starring Donnie Yen and later Aaron Kwok as Sun Wukong.
• Monkey King: Hero Is Back (2015), a Chinese animated film directed by Tian Xiaopeng that follows the adventures of Sun Wukong after he is released from his imprisonment.
• The New Legends of Monkey (2018-2020), a New Zealand-Australian television series that reimagines the story of Sun Wukong and his companions as they search for the sacred scrolls.
• Journey to the West (1986-2000), a Chinese television series that faithfully adapts the original novel in two seasons, starring Liu Xiao Ling Tong as Sun Wukong.
        -

        These are just some of the examples of the media adaptations of Sun Wukong. There are many more, such as video games, comics, novels, and musicals. Sun Wukong is a cultural icon that has inspired generations of fans and creators.

        -

        How to Download Sun Wukong Sub Indo from iQ.com

        -

        What is iQ.com and What Does It Offer?

        -

        iQ.com is a Chinese online video platform that offers a variety of content, such as movies, TV shows, variety shows, documentaries, and anime. It is owned by iQiyi, one of the largest online video companies in China. iQ.com has a huge library of Chinese content, including many adaptations of Sun Wukong. You can watch them online or download them to your device for offline viewing.

        -

        iQ.com also provides subtitles in different languages, including Indonesian. You can choose the language you prefer from the settings menu. You can also adjust the video quality, speed, and brightness according to your preference.

        -

        How to Register and Subscribe to iQ.com

        -

        To enjoy the full features of iQ.com, you need to register and subscribe to the platform. Here are the steps to do that:

1. Go to the official website of iQ.com or download the app on your device.
2. Click on the sign-up button and choose a method to register. You can use your email, phone number, or social media account.
3. Fill in the required information and create a password. You will receive a verification code or link to confirm your registration.
4. Log in to your account and click on the VIP button. You will see different plans and prices for subscription. Choose the one that suits you best and pay with your preferred method. You can use credit card, PayPal, Alipay, WeChat Pay, or other options.
5. Enjoy watching and downloading Sun Wukong sub indo on iQ.com!

        How to Search and Download Sun Wukong Sub Indo from iQ.com

        -

        Once you have registered and subscribed to iQ.com, you can start searching and downloading Sun Wukong sub indo from the platform. Here are the steps to do that:

1. Go to the homepage of iQ.com or open the app on your device.
2. Type "Sun Wukong" or "Monkey King" in the search bar and press enter. You will see a list of results related to Sun Wukong.
3. Select the movie or series you want to watch or download. You can also filter the results by genre, year, rating, or popularity.
4. Click on the play button to watch online or click on the download button to save it to your device. You can also add it to your favorites or watchlist for later viewing.
5. Enjoy watching and downloading Sun Wukong sub indo on iQ.com!

        Other Sources to Download Sun Wukong Sub Indo

        -

        IMDb

        -

        If you want to know more about the movies and series based on Sun Wukong, you can visit IMDb, which is an online database of information related to films, television programs, home videos, video games, and streaming content online. You can find ratings, reviews, cast and crew details, trivia, quotes, and more about Sun Wukong adaptations on IMDb.

        -

        You can also watch some of them online or download them from IMDb if they are available in your region. To do that, you need to register and subscribe to IMDb TV or Prime Video channels. You can also rent or buy some titles from Amazon Video.

        -

        Wikipedia

        -

        If you want to learn more about the history and legend of Sun Wukong, you can visit Wikipedia, which is a free online encyclopedia that anyone can edit. You can find articles about Sun Wukong's origin story, character traits, abilities, relationships, appearances in media, cultural impact, and more on Wikipedia.

        -

You can also find links to other sources and references that can help you explore more about Sun Wukong on Wikipedia. You can also contribute to the articles by editing or adding information if you have reliable sources.

        -

        YouTube

        -

        If you want to watch some clips, trailers, reviews, or fan-made videos of Sun Wukong adaptations, you can visit YouTube, which is an online video-sharing platform that allows users to upload, view, rate, share, comment on videos, and subscribe to other users. You can find a lot of content related to Sun Wukong on YouTube.

        -

        You can also watch some full-length movies or series of Sun Wukong on YouTube if they are uploaded by the official channels or authorized distributors. You can also rent or buy some titles from YouTube Movies. To do that, you need to register and sign in to your Google account and pay with your preferred method.

        -

        Conclusion

        -

        Sun Wukong is a fascinating and influential character that has captivated many audiences and creators for centuries. He is the star of many movies and series that showcase his adventures and battles in the Journey to the West. If you want to watch these adaptations online with Indonesian subtitles, you can download sun wukong sub indo from iQ.com, which is one of the best sources for Chinese content online. You can also use other sources such as IMDb, Wikipedia, and YouTube to find more information and content about Sun Wukong.

        -

        We hope this article has helped you learn how to download sun wukong sub indo and enjoy watching the Monkey King movies and series online. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

        -

        FAQs

        -

        Q: What is the best movie or series of Sun Wukong?

        -

        A: This is a subjective question that depends on your personal preference and taste. However, some of the most popular and acclaimed adaptations of Sun Wukong are Havoc in Heaven (1961), Journey to the West (1986-2000), The Monkey King (2014-2022), Monkey King: Hero Is Back (2015), and The New Legends of Monkey (2018-2020).

        -

        Q: How can I watch Sun Wukong sub indo for free?

        -

        A: Some of the sources we mentioned in this article offer free trials or limited access to some of their content. For example, iQ.com offers a 7-day free trial for new users, IMDb TV offers free streaming with ads for some titles, and YouTube offers free viewing for some videos. However, if you want to watch more content or download them for offline viewing, you will need to pay for a subscription or a rental fee.

        -

        Q: Is Sun Wukong based on a real person or animal?

        -

        A: Sun Wukong is a fictional character that is based on a combination of legends, myths, folklore, and literary imagination. He is not based on a real person or animal, although some scholars have suggested that he might have been inspired by historical figures such as Hanuman (a monkey god in Hinduism) or Xuanzang (a real Buddhist monk who traveled to India).

        -

        Q: What are the main themes and messages of Sun Wukong's story?

        -

        A: Sun Wukong's story is a rich and complex one that explores many themes and messages, such as:

• The quest for immortality and enlightenment
• The conflict between freedom and order
• The balance between loyalty and rebellion
• The value of friendship and teamwork
• The power of wisdom and humor
• The importance of compassion and forgiveness

        Q: How can I learn more about Sun Wukong and Chinese culture?

        -

        A: If you are interested in learning more about Sun Wukong and Chinese culture, you can do some research online or offline using various sources, such as books, articles, podcasts, documentaries, courses, museums, festivals, etc. You can also interact with other fans and experts who share your passion and curiosity.

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/tinkoff-ai/response-quality-classifiers/app.py b/spaces/tinkoff-ai/response-quality-classifiers/app.py deleted file mode 100644 index bb9a89f18cb4695d4b3b11e470eaa67a8436096e..0000000000000000000000000000000000000000 --- a/spaces/tinkoff-ai/response-quality-classifiers/app.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import streamlit as st -import transformers -import torch -import tokenizers -from typing import List, Dict - -st.subheader('Эта демонстрация позволяет поэксперементировать с моделями, которые оценивают, насколько предлагаемый ответ подходит к контексту диалога.') -model_name = st.selectbox( - 'Выберите модель', - ('tinkoff-ai/response-quality-classifier-tiny', 'tinkoff-ai/response-quality-classifier-base', 'tinkoff-ai/response-quality-classifier-large') -) -auth_token = os.environ.get('TOKEN') or True - -@st.cache(hash_funcs={tokenizers.Tokenizer: lambda tokenizer: hash(tokenizer.to_str())}, allow_output_mutation=True) -def load_model(model_name: str): - with st.spinner('Loading models...'): - tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, use_auth_token=auth_token) - model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name, use_auth_token=auth_token) - if torch.cuda.is_available(): - model = model.cuda() - return tokenizer, model - -context_3 = 'привет' -context_2 = 'привет!' -context_1 = 'как дела?' - -st.markdown('👱🏻‍♀️ **Настя**: ' + context_3) -st.markdown('🤖 **Диалоговый агент**: ' + context_2) -st.markdown('👱🏻‍♀️ **Настя**: ' + context_1) -response = st.text_input('🤖 Диалоговый агент:', 'норм') -sample = { - 'context_3': context_3, - 'context_2': context_2, - 'context_1': context_1, - 'response': response -} - -SEP_TOKEN = '[SEP]' -CLS_TOKEN = '[CLS]' -RESPONSE_TOKEN = '[RESPONSE_TOKEN]' -MAX_SEQ_LENGTH = 128 -sorted_dialog_columns = ['context_3', 'context_2', 'context_1', 'response'] - - -def tokenize_dialog_data( - tokenizer: transformers.PreTrainedTokenizer, - sample: Dict, - max_seq_length: int, - sorted_dialog_columns: List, -): - """ - Tokenize both contexts and response of dialog data separately - """ - len_message_history = len(sorted_dialog_columns) - max_seq_length = min(max_seq_length, tokenizer.model_max_length) - max_each_message_length = max_seq_length // len_message_history - 1 - messages = [sample[k] for k in sorted_dialog_columns] - result = {model_input_name: [] for model_input_name in tokenizer.model_input_names} - messages = [str(message) if message is not None else '' for message in messages] - tokens = tokenizer( - messages, padding=False, max_length=max_each_message_length, truncation=True, add_special_tokens=False - ) - for model_input_name in tokens.keys(): - result[model_input_name].extend(tokens[model_input_name]) - return result - - -def merge_dialog_data( - tokenizer: transformers.PreTrainedTokenizer, - sample: Dict -): - cls_token = tokenizer(CLS_TOKEN, add_special_tokens=False) - sep_token = tokenizer(SEP_TOKEN, add_special_tokens=False) - response_token = tokenizer(RESPONSE_TOKEN, add_special_tokens=False) - model_input_names = tokenizer.model_input_names - result = {} - for model_input_name in model_input_names: - tokens = [] - tokens.extend(cls_token[model_input_name]) - for i, message in enumerate(sample[model_input_name]): - tokens.extend(message) - if i < len(sample[model_input_name]) - 2: - tokens.extend(sep_token[model_input_name]) - elif i == len(sample[model_input_name]) - 2: - tokens.extend(response_token[model_input_name]) - 
result[model_input_name] = torch.tensor([tokens]) - if torch.cuda.is_available(): - result[model_input_name] = result[model_input_name].cuda() - return result - - -@st.cache -def inference(model_name: str, sample: dict): - tokenizer, model = load_model(model_name) - tokenized_dialog = tokenize_dialog_data(tokenizer, sample, MAX_SEQ_LENGTH, sorted_dialog_columns) - tokens = merge_dialog_data(tokenizer, tokenized_dialog) - with torch.inference_mode(): - logits = model(**tokens).logits - probas = torch.sigmoid(logits)[0].cpu().detach().numpy().tolist() - return probas - -with st.spinner('Running inference...'): - probas = inference(model_name, sample) -st.metric( - label='Вероятность того, что последний ответ диалогового агента релевантный', - value=round(probas[0], 3) -) -st.metric( - label='Вероятность того, что последний ответ диалогового агента вовлечённый', - value=round(probas[1], 3) -) diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Anatronica For Windows Full Download Crack Full Free Updated.md b/spaces/tioseFevbu/cartoon-converter/scripts/Anatronica For Windows Full Download Crack Full Free Updated.md deleted file mode 100644 index 77bd1ffc22747ab2c70fe3a6f64af43d1c9c3dd4..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Anatronica For Windows Full Download Crack Full Free Updated.md +++ /dev/null @@ -1,97 +0,0 @@ -
        -

        Anatronica for Windows Full Download Crack Full Free Updated

        -

If you are looking for an easy and convenient way to learn and explore human anatomy, you might want to try Anatronica for Windows. This is a powerful and interactive 3D anatomy software that lets you view, manipulate, and dissect various anatomical structures. You can also test your knowledge with quizzes and courses, and access a comprehensive atlas inspired by Gray's Anatomy.

        -

        However, Anatronica for Windows is not a free software. You need to pay a subscription fee to access all its features and content. If you want to save some money and get the full version of Anatronica for Windows with crack, you need to follow some steps. In this article, we will show you how to download Anatronica for Windows full download crack full free updated from torrent sites. We will also give you some tips on how to use Anatronica safely and effectively.

        -

        Anatronica For Windows Full Download Crack Full Free Updated


DOWNLOAD: https://urlcod.com/2uHwS7



        -

        What is Anatronica?

        -

        Anatronica is a 3D anatomy software developed by Goodwill Enterprise Development Limited. It is available for Windows, Mac, iOS, Android, and online platforms. It is designed for students, teachers, professionals, and anyone interested in human anatomy.

        -

        Features of Anatronica

        -

        Anatronica has many features that make it one of the best 3D anatomy software on the market. Some of these features are:

• Anatomically accurate, high-detail 3D models of human body systems, such as skeletal, muscular, cardiovascular, nervous, respiratory, reproductive, and urinary systems.
• Easy way of navigating and exploring human body parts using mouse or touch controls.
• Fast search function over 2500 human body parts.
• Virtual dissection: Peel layers of muscles and reveal the anatomical structures below them.
• Audio pronunciation for all anatomy terms.
• 3D location quizzes to test your knowledge.
• Suitable for students, teachers, and everybody interested in human anatomy.
• Frequent updates and additions for all platforms.

        Benefits of using Anatronica

        -

        Anatronica can help you learn and understand human anatomy better by providing you with a realistic and immersive experience. Some of the benefits of using Anatronica are:

• You can visualize the complex relationships between different body parts and systems.
• You can customize and dissect the models according to your needs and preferences.
• You can access a rich library of videos, courses, and lectures that cover various topics in anatomy.
• You can improve your retention and recall of anatomical information by engaging in interactive quizzes and games.
• You can use Anatronica as a reference tool or a dictionary for anatomy terms.

        How to download Anatronica for Windows full version with crack?

        -

        If you want to get the full version of Anatronica for Windows with crack, you need to download it from torrent sites. Torrent sites are websites that allow users to share files using peer-to-peer (P2P) file-sharing technology. However, torrenting is not always legal or safe. You need to be careful about what you download and how you protect yourself from malware, viruses, hackers, and legal issues.

        -

        Here are the steps you need to follow to download Anatronica for Windows full download crack full free updated from torrent sites:

        -

        Step 1: Find a reliable torrent site for software

        -

        Not all torrent sites are trustworthy or safe. Some of them may contain malware, viruses, fake files, or illegal content. You need to do some research and find a reliable torrent site that specializes in software. Some of the popular torrent sites for software are:

        -

• The Pirate Bay: This is one of the oldest and most popular torrent sites in the world. It has a huge collection of software, games, movies, music, and more. You can use the search function or browse by categories to find Anatronica for Windows full download crack.
• RARBG: This is another well-known torrent site that offers high-quality software, movies, TV shows, games, and more. It has a user-friendly interface and a fast download speed. You can also read user reviews and comments to check the quality and safety of the files.
• 1337x: This is a torrent site that has a modern and sleek design. It offers a variety of software, movies, TV shows, games, music, and more. You can use the search function or browse by categories or genres to find Anatronica for Windows full download crack.

        These are just some examples of torrent sites for software. You can also use other torrent sites that you trust or prefer. However, you should always be careful and cautious when downloading anything from torrent sites. You should also use a VPN for torrenting to protect your privacy and security.

        -

        Step 2: Search for Anatronica for Windows full download crack

        -

        Once you have found a reliable torrent site for software, you can search for Anatronica for Windows full download crack on it. You can use the search function or browse by categories to find the file you want. You should look for files that have a high number of seeders and leechers, as this indicates that the file is popular and fast to download. You should also check the file size, description, comments, and ratings to make sure that the file is genuine and safe.

        -

        When you have found the file you want, you can click on it to open its details page. There you can see more information about the file, such as its name, size, type, date, source, etc. You can also see the magnet link or the torrent file that you need to download the file. A magnet link is a URL that contains the information of the file and allows you to download it directly from other users without using a torrent client. A torrent file is a small file that contains the metadata of the file and allows you to download it using a torrent client.

        -

        Step 3: Download and install a VPN for torrenting

        -

        A VPN or a virtual private network is a service that encrypts your internet traffic and hides your IP address and location from anyone who might be spying on you. A VPN is essential for torrenting because it protects your privacy and security from hackers, ISPs, governments, and other third parties who might monitor your online activities or block your access to certain websites or content.

        -

        There are many VPN services available on the market, but not all of them are suitable for torrenting. You need to find a VPN that supports P2P file-sharing, has fast and stable servers in different countries, has a strict no-logs policy, and offers strong encryption and security features. Some of the best VPNs for torrenting are:

• NordVPN: This is one of the most popular and trusted VPNs in the world. It has over 5400 servers in 59 countries, including P2P-optimized servers for fast and secure torrenting. It also has a no-logs policy, AES-256 encryption, kill switch, DNS leak protection, and CyberSec feature that blocks ads and malware.
• ExpressVPN: This is another well-known and reliable VPN that offers fast and secure torrenting. It has over 3000 servers in 94 countries, including P2P-friendly servers in every location. It also has a no-logs policy, AES-256 encryption, kill switch, DNS leak protection, and split tunneling feature that allows you to choose which apps use the VPN.
• Surfshark: This is a relatively new but impressive VPN that offers unlimited simultaneous connections and affordable prices. It has over 3200 servers in 65 countries, including P2P-enabled servers in every location. It also has a no-logs policy, AES-256 encryption, kill switch, DNS leak protection, CleanWeb feature that blocks ads and malware, and Whitelister feature that allows you to exclude certain apps or websites from the VPN.

-

Step 4: Connect to a VPN server and start downloading Anatronica

          After you have downloaded and installed a VPN for torrenting, you need to connect to a VPN server that is suitable for your location and needs. You can choose a server that is close to you for faster speed, or a server that is in a different country for more privacy and access to geo-restricted content. You can also choose a server that is optimized for P2P file-sharing, as these servers are more secure and reliable for torrenting.

          -

          Once you have connected to a VPN server, you can start downloading Anatronica for Windows full download crack full free updated from the torrent site. You can either use the magnet link or the torrent file to download the file. If you use the magnet link, you just need to click on it and it will open your torrent client and start downloading the file. If you use the torrent file, you need to download it first and then open it with your torrent client and start downloading the file.

          -

          While downloading the file, you should keep an eye on the download progress, speed, and status. You should also check the seeders and leechers ratio, as this indicates how fast and stable the download will be. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file but have not completed it yet. Ideally, you want a high number of seeders and a low number of leechers for a fast and smooth download.

          -

          Step 5: Install and activate Anatronica on your PC

          -

          After you have finished downloading Anatronica for Windows full download crack full free updated from the torrent site, you need to install and activate it on your PC. To do this, you need to follow these steps:

1. Extract the downloaded file using a software like WinRAR or 7-Zip. You will get a folder that contains the setup file and the crack file.
2. Run the setup file and follow the instructions to install Anatronica on your PC.
3. Copy the crack file and paste it into the installation folder of Anatronica. This will replace the original file and activate the full version of Anatronica.
4. Launch Anatronica and enjoy its features and content.

          Congratulations! You have successfully downloaded, installed, and activated Anatronica for Windows full download crack full free updated from torrent sites. However, you should be aware that using cracked software is illegal and risky. You might face legal issues, malware infections, or performance issues by using cracked software. Therefore, we recommend that you use Anatronica at your own risk and discretion.

          -

          How to use Anatronica for Windows?

          -

          Anatronica for Windows is a 3D anatomy software that lets you learn and explore human anatomy in an easy and convenient way. You can use Anatronica for Windows for various purposes, such as studying, teaching, researching, or just having fun. Here are some tips on how to use Anatronica for Windows:

          -

          Explore human anatomy in 3D

          -

          Anatronica for Windows allows you to view, manipulate, and dissect various human body systems in 3D. You can choose from different body systems, such as skeletal, muscular, cardiovascular, nervous, respiratory, reproductive, and urinary systems. You can also select different body regions, such as head, neck, thorax, abdomen, pelvis, upper limb, lower limb, etc.

          -

          You can use your mouse or touch controls to rotate, zoom in/out, pan, or tilt the 3D models. You can also use keyboard shortcuts or buttons to perform different actions, such as hiding/showing labels, isolating parts, changing colors or transparency levels, etc.

          -

          You can also access a comprehensive atlas inspired by Gray's Anatomy that provides detailed information about each body part. You can read descriptions, definitions, synonyms, antonyms, etymology, pronunciation, and related terms for each body part. You can also see images and diagrams that illustrate the anatomy of each body part.

          -

          Customize and dissect the models

          -

          Anatronica for Windows allows you to customize and dissect the 3D models according to your needs and preferences. You can peel layers of muscles and reveal the anatomical structures below them. You can also hide or show specific body parts or systems to focus on the ones you want to study or explore.

          -

          You can also change the colors or transparency levels of the 3D models to make them more visible or realistic. You can also adjust the lighting and shadows to create different effects and moods. You can also use different tools, such as scissors, scalpel, forceps, etc., to cut, slice, or remove parts of the 3D models.

          -

          You can also save your customizations and dissections as presets or snapshots that you can load or share later. You can also export your 3D models as images or videos that you can use for presentations, reports, or projects.

          -

          Test your knowledge with quizzes and courses

          -

          Anatronica for Windows allows you to test your knowledge with quizzes and courses that cover various topics in anatomy. You can choose from different types of quizzes, such as multiple choice, true/false, matching, labeling, etc. You can also choose from different levels of difficulty, such as easy, medium, hard, or expert.

          -

          You can also access a rich library of videos, courses, and lectures that provide in-depth explanations and demonstrations of various anatomical concepts and phenomena. You can watch videos that show how different body systems work and interact with each other. You can also enroll in courses that teach you the basics of anatomy or help you prepare for exams or certifications.

          -

          You can also track your progress and performance with statistics and reports that show your scores, accuracy, speed, strengths, weaknesses, etc. You can also compare your results with other users or challenge them to compete with you.

          -

          Conclusion

          -

          Anatronica for Windows is a 3D anatomy software that lets you learn and explore human anatomy in an easy and convenient way. It has many features and benefits that make it one of the best 3D anatomy software on the market. However, it is not a free software. You need to pay a subscription fee to access all its features and content.

          -

          If you want to save some money and get the full version of Anatronica for Windows with crack, you need to download it from torrent sites. However, torrenting is not always legal or safe. You need to be careful about what you download and how you protect yourself from malware, viruses, hackers, and legal issues.

          -

          In this article, we showed you how to download Anatronica for Windows full download crack full free updated from torrent sites. We also gave you some tips on how to use Anatronica safely and effectively. We hope you found this article helpful and informative.

          -

          FAQs

          -

          Here are some frequently asked questions about Anatronica for Windows:

1. Q: Is Anatronica for Windows compatible with other devices or platforms?
   A: Yes, Anatronica is available for Windows, Mac, iOS, Android, and online platforms. You can use the same account to access Anatronica on different devices or platforms.
2. Q: Is Anatronica for Windows updated regularly?
   A: Yes, Anatronica is updated frequently with new features and content for all platforms. You can check the official website or social media pages of Anatronica for the latest news and updates.
3. Q: Is Anatronica for Windows safe to use?
   A: Yes, Anatronica is safe to use if you download it from the official website or a trusted source. However, if you download it from torrent sites with a crack, you might expose yourself to malware, viruses, hackers, and legal issues. You should always use a VPN for torrenting to protect your privacy and security.
4. Q: Is Anatronica for Windows worth it?
   A: Yes, Anatronica is worth it if you are interested in human anatomy and want to learn and explore it in an easy and convenient way. It has many features and benefits that make it one of the best 3D anatomy software on the market. However, if you are not willing to pay a subscription fee or risk downloading it from torrent sites with a crack, you might want to look for other alternatives.
5. Q: What are some alternatives to Anatronica for Windows?
   A: Some of the alternatives to Anatronica for Windows are:
   • Complete Anatomy: This is another 3D anatomy software that offers high-quality models, interactive features, and educational content. It is available for Windows, Mac, iOS, Android, and online platforms. It also has a free version with limited features and content.
   • Visible Body: This is a 3D anatomy software that offers realistic models, animations, quizzes, and courses. It is available for Windows, Mac, iOS, Android, and online platforms. It also has a free version with limited features and content.
   • BioDigital Human: This is a 3D anatomy software that offers customizable models, simulations, games, and lessons. It is available for Windows, Mac, iOS, Android, and online platforms. It also has a free version with limited features and content.

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/FS2004 - PMDG 737-NG [ALL VARIANTS] Demo.md b/spaces/tioseFevbu/cartoon-converter/scripts/FS2004 - PMDG 737-NG [ALL VARIANTS] Demo.md deleted file mode 100644 index ef99b09661928ce9ce88a1b4c0772ea8ea781f29..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/FS2004 - PMDG 737-NG [ALL VARIANTS] Demo.md +++ /dev/null @@ -1,34 +0,0 @@ -
          -

          FS2004 - PMDG 737-NG [ALL VARIANTS] Demo: A Review

          -

          If you are a fan of realistic and complex flight simulation, you might be interested in trying out the PMDG 737-NG [ALL VARIANTS] Demo for FS2004. This demo lets you experience the features and performance of one of the most popular and advanced add-on aircraft for Microsoft Flight Simulator 2004.

          -

          The PMDG 737-NG [ALL VARIANTS] Demo is based on the full version of the PMDG 737-NG, which is a highly detailed and accurate simulation of the Boeing 737 Next Generation series. The demo includes the 737-600, 737-700, 737-800 and 737-900 models, each with different configurations and liveries. You can fly any of these models in free flight mode or choose from several pre-defined flights that showcase the capabilities of the aircraft.

          -

          FS2004 - PMDG 737-NG [ALL VARIANTS] Demo


Download File: https://urlcod.com/2uHvrP



          -

          Some of the features that you can enjoy in the demo are:

• A fully functional virtual cockpit with interactive displays, switches and gauges.
• A realistic flight model that responds to weather, weight and balance, fuel consumption and aerodynamics.
• A comprehensive systems simulation that covers electrical, hydraulic, pneumatic, fire protection, fuel, air conditioning and more.
• A custom sound package that recreates the engine noise, cockpit alerts, environmental sounds and voice announcements.
• Detailed documentation that explains the operation and procedures of the aircraft.

          The PMDG 737-NG [ALL VARIANTS] Demo is a great way to get a taste of what the full version has to offer. However, it also has some limitations that you should be aware of:

• The demo is time-limited to 15 minutes per flight. After that, the aircraft will lose power and become unflyable.
• The demo does not include any liveries other than the default PMDG House livery. You cannot install or use any third-party liveries with the demo.
• The demo does not support any online flying networks or multiplayer modes.
• The demo does not include any updates or patches that have been released for the full version.

          If you want to experience the full potential of the PMDG 737-NG [ALL VARIANTS], you will need to purchase and install the full version from PMDG's website[^1^]. The full version costs $54.99 USD and includes all the features and updates that have been released since its launch in 2003. You will also get access to a dedicated support forum and a large community of fellow simmers who share tips, tricks and liveries for the aircraft.

          -

          The PMDG 737-NG [ALL VARIANTS] Demo is a must-try for any FS2004 enthusiast who wants to fly a realistic and complex Boeing 737. You can download it for free from various flight simulation websites[^2^]. Just make sure you have enough disk space and system resources to run it smoothly. And don't forget to set your timer before you take off!

          - -

          What are the benefits of flying the PMDG 737-NG [ALL VARIANTS]?

          -

          The PMDG 737-NG [ALL VARIANTS] is not just a simple add-on aircraft. It is a complete simulation of the Boeing 737 Next Generation series, which is one of the most successful and versatile jetliners in the world. The PMDG 737-NG [ALL VARIANTS] offers you the opportunity to fly like a real pilot, with realistic procedures, systems and performance. You can learn how to operate the advanced flight deck, manage the complex systems, handle various failures and emergencies, and optimize your fuel efficiency and flight planning. You can also enjoy the stunning visuals, sounds and animations that bring the aircraft to life.

          -

          The PMDG 737-NG [ALL VARIANTS] is compatible with many other add-ons and features that enhance your flight simulation experience. For example, you can use the Electronic Flight Bag (EFB) to access Navigraph charts and data, which provide you with accurate and up-to-date information on airports, routes and procedures. You can also use the PMDG Global Flight Operations (GFO) service, which will be available in late 2020, to connect with other PMDG pilots and simulate real-world airline operations. You can also customize your aircraft with different equipment options, liveries and configurations to suit your preferences and needs.

          -

          How to get started with the PMDG 737-NG [ALL VARIANTS]?

          -

          If you are new to the PMDG 737-NG [ALL VARIANTS], you might feel overwhelmed by the amount of detail and complexity that this simulation offers. However, don't worry, as PMDG provides you with plenty of resources and guidance to help you get familiar with the aircraft and enjoy your flights.

          -

          -

          First of all, you should read the documentation that comes with the product. You can find it in the PMDG Operations Center 2.0, which is a tool that allows you to manage your PMDG products, updates, liveries and settings. The documentation includes a quick start guide, a tutorial flight, a flight crew training manual, a flight crew operations manual and more. These documents will explain the features, functions and procedures of the aircraft in detail.

          -

          Secondly, you should follow the tutorial flight that is included in the documentation. This tutorial will guide you through a complete flight from London Gatwick (EGKK) to Amsterdam Schiphol (EHAM), using the 737-800 model with winglets. You will learn how to set up the aircraft, program the flight management computer (FMC), perform the pre-flight checks, take off, cruise, descend, land and park. You will also learn how to use some of the advanced features of the aircraft, such as the autothrottle, autopilot, auto brake and speed brake.

          -

          Thirdly, you should practice flying different scenarios and situations with the aircraft. You can choose from several pre-defined flights that are included in the product, or create your own flights using any of the models and variants that are available. You can also adjust the weather, time and season settings to create different challenges and effects. You can also test your skills and knowledge by simulating various failures and emergencies that are covered in the quick reference handbook (QRH). You can access these failures from the PMDG menu on the upper left corner of your screen.

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Marvel Ultimate Alliance 2 Pc Game Baixar [BETTER].md b/spaces/tioseFevbu/cartoon-converter/scripts/Marvel Ultimate Alliance 2 Pc Game Baixar [BETTER].md deleted file mode 100644 index 2f548e0f6003656dba0369f6648b7d13e92c873d..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Marvel Ultimate Alliance 2 Pc Game Baixar [BETTER].md +++ /dev/null @@ -1,36 +0,0 @@ - -

          Marvel Ultimate Alliance 2: How to Download and Play on PC

          -

          Marvel Ultimate Alliance 2 is a popular action-adventure game that features characters from the Marvel Comics universe. The game was released in 2016 for various platforms, including PC. However, many fans have trouble finding and installing the game on their computers. In this article, we will show you how to download and play Marvel Ultimate Alliance 2 on PC with ease.

          -

          What is Marvel Ultimate Alliance 2?

          -

          Marvel Ultimate Alliance 2 is the sequel to the 2006 game Marvel Ultimate Alliance. The game follows the storyline of the Civil War comic book series, where the superheroes are divided into two factions: one led by Iron Man, who supports the Superhuman Registration Act, and the other led by Captain America, who opposes it. The player can choose which side to join and customize their team of four heroes from a roster of 24 characters, each with their own powers and abilities.

          -

          marvel ultimate alliance 2 pc game baixar


          Download >> https://urlcod.com/2uHvrl



          -

          The game features a co-op mode, where up to four players can play together online or offline. The game also has a fusion system, where two heroes can combine their powers to create a powerful attack. The game has received positive reviews from critics and fans alike, who praised its gameplay, graphics, story, and voice acting.

          -

          How to Download and Play Marvel Ultimate Alliance 2 on PC?

          -

          There are two main ways to download and play Marvel Ultimate Alliance 2 on PC: through Steam or through an emulator.

          -

          Steam

          -

          Steam is a digital distribution platform that allows users to buy and download games for PC. Marvel Ultimate Alliance 2 is available on Steam for $39.99 USD. To download and play the game on Steam, you need to follow these steps:

1. Create a Steam account or log in to your existing one.
2. Search for Marvel Ultimate Alliance 2 on the Steam store.
3. Add the game to your cart and proceed to checkout.
4. Once the payment is confirmed, the game will be added to your library.
5. Click on the game in your library and select "Install".
6. Wait for the game to download and install on your PC.
7. Launch the game and enjoy!

          Emulator

          -

An emulator is software that allows users to run games from other platforms on their PC. For Marvel Ultimate Alliance 2, you need an emulator that can run PS2 games, such as PCSX2. To download and play the game on an emulator, you need to follow these steps:

          -

1. Download and install PCSX2 from its official website.
2. Download and install the PS2 BIOS file.
3. Download the Marvel Ultimate Alliance 2 ISO file.
4. Open PCSX2 and configure it according to your PC specifications.
5. Select "CDVD" and then "ISO Selector". Browse for the Marvel Ultimate Alliance 2 ISO file and select it.
6. Select "System" and then "Boot ISO". The game will start running on your PC.
7. Use your keyboard or a controller to play the game.

          Conclusion

          -

          Marvel Ultimate Alliance 2 is a fun and exciting game that lets you experience the Marvel Civil War from different perspectives. You can download and play the game on PC either through Steam or through an emulator. Both methods have their pros and cons, so you can choose the one that suits you best. We hope this article helped you with downloading and playing Marvel Ultimate Alliance 2 on PC.

          7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/logging.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/logging.py deleted file mode 100644 index c10e1f4ced6bcc799799b62666695998e095bbaf..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/logging.py +++ /dev/null @@ -1,348 +0,0 @@ -import contextlib -import errno -import logging -import logging.handlers -import os -import sys -import threading -from dataclasses import dataclass -from io import TextIOWrapper -from logging import Filter -from typing import Any, ClassVar, Generator, List, Optional, TextIO, Type - -from pip._vendor.rich.console import ( - Console, - ConsoleOptions, - ConsoleRenderable, - RenderableType, - RenderResult, - RichCast, -) -from pip._vendor.rich.highlighter import NullHighlighter -from pip._vendor.rich.logging import RichHandler -from pip._vendor.rich.segment import Segment -from pip._vendor.rich.style import Style - -from pip._internal.utils._log import VERBOSE, getLogger -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX -from pip._internal.utils.misc import ensure_dir - -_log_state = threading.local() -subprocess_logger = getLogger("pip.subprocessor") - - -class BrokenStdoutLoggingError(Exception): - """ - Raised if BrokenPipeError occurs for the stdout stream while logging. - """ - - -def _is_broken_pipe_error(exc_class: Type[BaseException], exc: BaseException) -> bool: - if exc_class is BrokenPipeError: - return True - - # On Windows, a broken pipe can show up as EINVAL rather than EPIPE: - # https://bugs.python.org/issue19612 - # https://bugs.python.org/issue30418 - if not WINDOWS: - return False - - return isinstance(exc, OSError) and exc.errno in (errno.EINVAL, errno.EPIPE) - - -@contextlib.contextmanager -def indent_log(num: int = 2) -> Generator[None, None, None]: - """ - A context manager which will cause the log output to be indented for any - log messages emitted inside it. - """ - # For thread-safety - _log_state.indentation = get_indentation() - _log_state.indentation += num - try: - yield - finally: - _log_state.indentation -= num - - -def get_indentation() -> int: - return getattr(_log_state, "indentation", 0) - - -class IndentingFormatter(logging.Formatter): - default_time_format = "%Y-%m-%dT%H:%M:%S" - - def __init__( - self, - *args: Any, - add_timestamp: bool = False, - **kwargs: Any, - ) -> None: - """ - A logging.Formatter that obeys the indent_log() context manager. - - :param add_timestamp: A bool indicating output lines should be prefixed - with their record's timestamp. - """ - self.add_timestamp = add_timestamp - super().__init__(*args, **kwargs) - - def get_message_start(self, formatted: str, levelno: int) -> str: - """ - Return the start of the formatted log message (not counting the - prefix to add to each line). - """ - if levelno < logging.WARNING: - return "" - if formatted.startswith(DEPRECATION_MSG_PREFIX): - # Then the message already has a prefix. We don't want it to - # look like "WARNING: DEPRECATION: ...." - return "" - if levelno < logging.ERROR: - return "WARNING: " - - return "ERROR: " - - def format(self, record: logging.LogRecord) -> str: - """ - Calls the standard formatter, but will indent all of the log message - lines by our current indentation level. 
- """ - formatted = super().format(record) - message_start = self.get_message_start(formatted, record.levelno) - formatted = message_start + formatted - - prefix = "" - if self.add_timestamp: - prefix = f"{self.formatTime(record)} " - prefix += " " * get_indentation() - formatted = "".join([prefix + line for line in formatted.splitlines(True)]) - return formatted - - -@dataclass -class IndentedRenderable: - renderable: RenderableType - indent: int - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - segments = console.render(self.renderable, options) - lines = Segment.split_lines(segments) - for line in lines: - yield Segment(" " * self.indent) - yield from line - yield Segment("\n") - - -class RichPipStreamHandler(RichHandler): - KEYWORDS: ClassVar[Optional[List[str]]] = [] - - def __init__(self, stream: Optional[TextIO], no_color: bool) -> None: - super().__init__( - console=Console(file=stream, no_color=no_color, soft_wrap=True), - show_time=False, - show_level=False, - show_path=False, - highlighter=NullHighlighter(), - ) - - # Our custom override on Rich's logger, to make things work as we need them to. - def emit(self, record: logging.LogRecord) -> None: - style: Optional[Style] = None - - # If we are given a diagnostic error to present, present it with indentation. - assert isinstance(record.args, tuple) - if record.msg == "[present-rich] %s" and len(record.args) == 1: - rich_renderable = record.args[0] - assert isinstance( - rich_renderable, (ConsoleRenderable, RichCast, str) - ), f"{rich_renderable} is not rich-console-renderable" - - renderable: RenderableType = IndentedRenderable( - rich_renderable, indent=get_indentation() - ) - else: - message = self.format(record) - renderable = self.render_message(record, message) - if record.levelno is not None: - if record.levelno >= logging.ERROR: - style = Style(color="red") - elif record.levelno >= logging.WARNING: - style = Style(color="yellow") - - try: - self.console.print(renderable, overflow="ignore", crop=False, style=style) - except Exception: - self.handleError(record) - - def handleError(self, record: logging.LogRecord) -> None: - """Called when logging is unable to log some output.""" - - exc_class, exc = sys.exc_info()[:2] - # If a broken pipe occurred while calling write() or flush() on the - # stdout stream in logging's Handler.emit(), then raise our special - # exception so we can handle it in main() instead of logging the - # broken pipe error and continuing. - if ( - exc_class - and exc - and self.console.file is sys.stdout - and _is_broken_pipe_error(exc_class, exc) - ): - raise BrokenStdoutLoggingError() - - return super().handleError(record) - - -class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler): - def _open(self) -> TextIOWrapper: - ensure_dir(os.path.dirname(self.baseFilename)) - return super()._open() - - -class MaxLevelFilter(Filter): - def __init__(self, level: int) -> None: - self.level = level - - def filter(self, record: logging.LogRecord) -> bool: - return record.levelno < self.level - - -class ExcludeLoggerFilter(Filter): - - """ - A logging Filter that excludes records from a logger (or its children). - """ - - def filter(self, record: logging.LogRecord) -> bool: - # The base Filter class allows only records from a logger (or its - # children). 
- return not super().filter(record) - - -def setup_logging(verbosity: int, no_color: bool, user_log_file: Optional[str]) -> int: - """Configures and sets up all of the logging - - Returns the requested logging level, as its integer value. - """ - - # Determine the level to be logging at. - if verbosity >= 2: - level_number = logging.DEBUG - elif verbosity == 1: - level_number = VERBOSE - elif verbosity == -1: - level_number = logging.WARNING - elif verbosity == -2: - level_number = logging.ERROR - elif verbosity <= -3: - level_number = logging.CRITICAL - else: - level_number = logging.INFO - - level = logging.getLevelName(level_number) - - # The "root" logger should match the "console" level *unless* we also need - # to log to a user log file. - include_user_log = user_log_file is not None - if include_user_log: - additional_log_file = user_log_file - root_level = "DEBUG" - else: - additional_log_file = "/dev/null" - root_level = level - - # Disable any logging besides WARNING unless we have DEBUG level logging - # enabled for vendored libraries. - vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG" - - # Shorthands for clarity - log_streams = { - "stdout": "ext://sys.stdout", - "stderr": "ext://sys.stderr", - } - handler_classes = { - "stream": "pip._internal.utils.logging.RichPipStreamHandler", - "file": "pip._internal.utils.logging.BetterRotatingFileHandler", - } - handlers = ["console", "console_errors", "console_subprocess"] + ( - ["user_log"] if include_user_log else [] - ) - - logging.config.dictConfig( - { - "version": 1, - "disable_existing_loggers": False, - "filters": { - "exclude_warnings": { - "()": "pip._internal.utils.logging.MaxLevelFilter", - "level": logging.WARNING, - }, - "restrict_to_subprocess": { - "()": "logging.Filter", - "name": subprocess_logger.name, - }, - "exclude_subprocess": { - "()": "pip._internal.utils.logging.ExcludeLoggerFilter", - "name": subprocess_logger.name, - }, - }, - "formatters": { - "indent": { - "()": IndentingFormatter, - "format": "%(message)s", - }, - "indent_with_timestamp": { - "()": IndentingFormatter, - "format": "%(message)s", - "add_timestamp": True, - }, - }, - "handlers": { - "console": { - "level": level, - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stdout"], - "filters": ["exclude_subprocess", "exclude_warnings"], - "formatter": "indent", - }, - "console_errors": { - "level": "WARNING", - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stderr"], - "filters": ["exclude_subprocess"], - "formatter": "indent", - }, - # A handler responsible for logging to the console messages - # from the "subprocessor" logger. 
- "console_subprocess": { - "level": level, - "class": handler_classes["stream"], - "stream": log_streams["stderr"], - "no_color": no_color, - "filters": ["restrict_to_subprocess"], - "formatter": "indent", - }, - "user_log": { - "level": "DEBUG", - "class": handler_classes["file"], - "filename": additional_log_file, - "encoding": "utf-8", - "delay": True, - "formatter": "indent_with_timestamp", - }, - }, - "root": { - "level": root_level, - "handlers": handlers, - }, - "loggers": {"pip._vendor": {"level": vendored_log_level}}, - } - ) - - return level_number diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/functools.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/functools.py deleted file mode 100644 index a3fea3a1ae12be660a94c277cd748bd43e67b5dc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/functools.py +++ /dev/null @@ -1,525 +0,0 @@ -import functools -import time -import inspect -import collections -import types -import itertools - -import pkg_resources.extern.more_itertools - -from typing import Callable, TypeVar - - -CallableT = TypeVar("CallableT", bound=Callable[..., object]) - - -def compose(*funcs): - """ - Compose any number of unary functions into a single unary function. - - >>> import textwrap - >>> expected = str.strip(textwrap.dedent(compose.__doc__)) - >>> strip_and_dedent = compose(str.strip, textwrap.dedent) - >>> strip_and_dedent(compose.__doc__) == expected - True - - Compose also allows the innermost function to take arbitrary arguments. - - >>> round_three = lambda x: round(x, ndigits=3) - >>> f = compose(round_three, int.__truediv__) - >>> [f(3*x, x+1) for x in range(1,10)] - [1.5, 2.0, 2.25, 2.4, 2.5, 2.571, 2.625, 2.667, 2.7] - """ - - def compose_two(f1, f2): - return lambda *args, **kwargs: f1(f2(*args, **kwargs)) - - return functools.reduce(compose_two, funcs) - - -def method_caller(method_name, *args, **kwargs): - """ - Return a function that will call a named method on the - target object with optional positional and keyword - arguments. - - >>> lower = method_caller('lower') - >>> lower('MyString') - 'mystring' - """ - - def call_method(target): - func = getattr(target, method_name) - return func(*args, **kwargs) - - return call_method - - -def once(func): - """ - Decorate func so it's only ever called the first time. - - This decorator can ensure that an expensive or non-idempotent function - will not be expensive on subsequent calls and is idempotent. - - >>> add_three = once(lambda a: a+3) - >>> add_three(3) - 6 - >>> add_three(9) - 6 - >>> add_three('12') - 6 - - To reset the stored value, simply clear the property ``saved_result``. - - >>> del add_three.saved_result - >>> add_three(9) - 12 - >>> add_three(8) - 12 - - Or invoke 'reset()' on it. - - >>> add_three.reset() - >>> add_three(-3) - 0 - >>> add_three(0) - 0 - """ - - @functools.wraps(func) - def wrapper(*args, **kwargs): - if not hasattr(wrapper, 'saved_result'): - wrapper.saved_result = func(*args, **kwargs) - return wrapper.saved_result - - wrapper.reset = lambda: vars(wrapper).__delitem__('saved_result') - return wrapper - - -def method_cache( - method: CallableT, - cache_wrapper: Callable[ - [CallableT], CallableT - ] = functools.lru_cache(), # type: ignore[assignment] -) -> CallableT: - """ - Wrap lru_cache to support storing the cache data in the object instances. 
- - Abstracts the common paradigm where the method explicitly saves an - underscore-prefixed protected property on first call and returns that - subsequently. - - >>> class MyClass: - ... calls = 0 - ... - ... @method_cache - ... def method(self, value): - ... self.calls += 1 - ... return value - - >>> a = MyClass() - >>> a.method(3) - 3 - >>> for x in range(75): - ... res = a.method(x) - >>> a.calls - 75 - - Note that the apparent behavior will be exactly like that of lru_cache - except that the cache is stored on each instance, so values in one - instance will not flush values from another, and when an instance is - deleted, so are the cached values for that instance. - - >>> b = MyClass() - >>> for x in range(35): - ... res = b.method(x) - >>> b.calls - 35 - >>> a.method(0) - 0 - >>> a.calls - 75 - - Note that if method had been decorated with ``functools.lru_cache()``, - a.calls would have been 76 (due to the cached value of 0 having been - flushed by the 'b' instance). - - Clear the cache with ``.cache_clear()`` - - >>> a.method.cache_clear() - - Same for a method that hasn't yet been called. - - >>> c = MyClass() - >>> c.method.cache_clear() - - Another cache wrapper may be supplied: - - >>> cache = functools.lru_cache(maxsize=2) - >>> MyClass.method2 = method_cache(lambda self: 3, cache_wrapper=cache) - >>> a = MyClass() - >>> a.method2() - 3 - - Caution - do not subsequently wrap the method with another decorator, such - as ``@property``, which changes the semantics of the function. - - See also - http://code.activestate.com/recipes/577452-a-memoize-decorator-for-instance-methods/ - for another implementation and additional justification. - """ - - def wrapper(self: object, *args: object, **kwargs: object) -> object: - # it's the first call, replace the method with a cached, bound method - bound_method: CallableT = types.MethodType( # type: ignore[assignment] - method, self - ) - cached_method = cache_wrapper(bound_method) - setattr(self, method.__name__, cached_method) - return cached_method(*args, **kwargs) - - # Support cache clear even before cache has been created. - wrapper.cache_clear = lambda: None # type: ignore[attr-defined] - - return ( # type: ignore[return-value] - _special_method_cache(method, cache_wrapper) or wrapper - ) - - -def _special_method_cache(method, cache_wrapper): - """ - Because Python treats special methods differently, it's not - possible to use instance attributes to implement the cached - methods. - - Instead, install the wrapper method under a different name - and return a simple proxy to that wrapper. - - https://github.com/jaraco/jaraco.functools/issues/5 - """ - name = method.__name__ - special_names = '__getattr__', '__getitem__' - if name not in special_names: - return - - wrapper_name = '__cached' + name - - def proxy(self, *args, **kwargs): - if wrapper_name not in vars(self): - bound = types.MethodType(method, self) - cache = cache_wrapper(bound) - setattr(self, wrapper_name, cache) - else: - cache = getattr(self, wrapper_name) - return cache(*args, **kwargs) - - return proxy - - -def apply(transform): - """ - Decorate a function with a transform function that is - invoked on results returned from the decorated function. - - >>> @apply(reversed) - ... def get_numbers(start): - ... "doc for get_numbers" - ... 
return range(start, start+3) - >>> list(get_numbers(4)) - [6, 5, 4] - >>> get_numbers.__doc__ - 'doc for get_numbers' - """ - - def wrap(func): - return functools.wraps(func)(compose(transform, func)) - - return wrap - - -def result_invoke(action): - r""" - Decorate a function with an action function that is - invoked on the results returned from the decorated - function (for its side-effect), then return the original - result. - - >>> @result_invoke(print) - ... def add_two(a, b): - ... return a + b - >>> x = add_two(2, 3) - 5 - >>> x - 5 - """ - - def wrap(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - result = func(*args, **kwargs) - action(result) - return result - - return wrapper - - return wrap - - -def call_aside(f, *args, **kwargs): - """ - Call a function for its side effect after initialization. - - >>> @call_aside - ... def func(): print("called") - called - >>> func() - called - - Use functools.partial to pass parameters to the initial call - - >>> @functools.partial(call_aside, name='bingo') - ... def func(name): print("called with", name) - called with bingo - """ - f(*args, **kwargs) - return f - - -class Throttler: - """ - Rate-limit a function (or other callable) - """ - - def __init__(self, func, max_rate=float('Inf')): - if isinstance(func, Throttler): - func = func.func - self.func = func - self.max_rate = max_rate - self.reset() - - def reset(self): - self.last_called = 0 - - def __call__(self, *args, **kwargs): - self._wait() - return self.func(*args, **kwargs) - - def _wait(self): - "ensure at least 1/max_rate seconds from last call" - elapsed = time.time() - self.last_called - must_wait = 1 / self.max_rate - elapsed - time.sleep(max(0, must_wait)) - self.last_called = time.time() - - def __get__(self, obj, type=None): - return first_invoke(self._wait, functools.partial(self.func, obj)) - - -def first_invoke(func1, func2): - """ - Return a function that when invoked will invoke func1 without - any parameters (for its side-effect) and then invoke func2 - with whatever parameters were passed, returning its result. - """ - - def wrapper(*args, **kwargs): - func1() - return func2(*args, **kwargs) - - return wrapper - - -def retry_call(func, cleanup=lambda: None, retries=0, trap=()): - """ - Given a callable func, trap the indicated exceptions - for up to 'retries' times, invoking cleanup on the - exception. On the final attempt, allow any exceptions - to propagate. - """ - attempts = itertools.count() if retries == float('inf') else range(retries) - for attempt in attempts: - try: - return func() - except trap: - cleanup() - - return func() - - -def retry(*r_args, **r_kwargs): - """ - Decorator wrapper for retry_call. Accepts arguments to retry_call - except func and then returns a decorator for the decorated function. - - Ex: - - >>> @retry(retries=3) - ... def my_func(a, b): - ... "this is my funk" - ... print(a, b) - >>> my_func.__doc__ - 'this is my funk' - """ - - def decorate(func): - @functools.wraps(func) - def wrapper(*f_args, **f_kwargs): - bound = functools.partial(func, *f_args, **f_kwargs) - return retry_call(bound, *r_args, **r_kwargs) - - return wrapper - - return decorate - - -def print_yielded(func): - """ - Convert a generator into a function that prints all yielded elements - - >>> @print_yielded - ... def x(): - ... 
yield 3; yield None - >>> x() - 3 - None - """ - print_all = functools.partial(map, print) - print_results = compose(more_itertools.consume, print_all, func) - return functools.wraps(func)(print_results) - - -def pass_none(func): - """ - Wrap func so it's not called if its first param is None - - >>> print_text = pass_none(print) - >>> print_text('text') - text - >>> print_text(None) - """ - - @functools.wraps(func) - def wrapper(param, *args, **kwargs): - if param is not None: - return func(param, *args, **kwargs) - - return wrapper - - -def assign_params(func, namespace): - """ - Assign parameters from namespace where func solicits. - - >>> def func(x, y=3): - ... print(x, y) - >>> assigned = assign_params(func, dict(x=2, z=4)) - >>> assigned() - 2 3 - - The usual errors are raised if a function doesn't receive - its required parameters: - - >>> assigned = assign_params(func, dict(y=3, z=4)) - >>> assigned() - Traceback (most recent call last): - TypeError: func() ...argument... - - It even works on methods: - - >>> class Handler: - ... def meth(self, arg): - ... print(arg) - >>> assign_params(Handler().meth, dict(arg='crystal', foo='clear'))() - crystal - """ - sig = inspect.signature(func) - params = sig.parameters.keys() - call_ns = {k: namespace[k] for k in params if k in namespace} - return functools.partial(func, **call_ns) - - -def save_method_args(method): - """ - Wrap a method such that when it is called, the args and kwargs are - saved on the method. - - >>> class MyClass: - ... @save_method_args - ... def method(self, a, b): - ... print(a, b) - >>> my_ob = MyClass() - >>> my_ob.method(1, 2) - 1 2 - >>> my_ob._saved_method.args - (1, 2) - >>> my_ob._saved_method.kwargs - {} - >>> my_ob.method(a=3, b='foo') - 3 foo - >>> my_ob._saved_method.args - () - >>> my_ob._saved_method.kwargs == dict(a=3, b='foo') - True - - The arguments are stored on the instance, allowing for - different instance to save different args. - - >>> your_ob = MyClass() - >>> your_ob.method({str('x'): 3}, b=[4]) - {'x': 3} [4] - >>> your_ob._saved_method.args - ({'x': 3},) - >>> my_ob._saved_method.args - () - """ - args_and_kwargs = collections.namedtuple('args_and_kwargs', 'args kwargs') - - @functools.wraps(method) - def wrapper(self, *args, **kwargs): - attr_name = '_saved_' + method.__name__ - attr = args_and_kwargs(args, kwargs) - setattr(self, attr_name, attr) - return method(self, *args, **kwargs) - - return wrapper - - -def except_(*exceptions, replace=None, use=None): - """ - Replace the indicated exceptions, if raised, with the indicated - literal replacement or evaluated expression (if present). - - >>> safe_int = except_(ValueError)(int) - >>> safe_int('five') - >>> safe_int('5') - 5 - - Specify a literal replacement with ``replace``. - - >>> safe_int_r = except_(ValueError, replace=0)(int) - >>> safe_int_r('five') - 0 - - Provide an expression to ``use`` to pass through particular parameters. 
- - >>> safe_int_pt = except_(ValueError, use='args[0]')(int) - >>> safe_int_pt('five') - 'five' - - """ - - def decorate(func): - @functools.wraps(func) - def wrapper(*args, **kwargs): - try: - return func(*args, **kwargs) - except exceptions: - try: - return eval(use) - except TypeError: - return replace - - return wrapper - - return decorate diff --git a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/loading.py b/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/loading.py deleted file mode 100644 index 21958c47862cd05da5f5f9bf72393e90bf315f26..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/loading.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings - -import mmcv -import numpy as np -from mmdet.core import BitmapMasks, PolygonMasks -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines.loading import LoadAnnotations, LoadImageFromFile - - -@PIPELINES.register_module() -class LoadTextAnnotations(LoadAnnotations): - """Load annotations for text detection. - - Args: - with_bbox (bool): Whether to parse and load the bbox annotation. - Default: True. - with_label (bool): Whether to parse and load the label annotation. - Default: True. - with_mask (bool): Whether to parse and load the mask annotation. - Default: False. - with_seg (bool): Whether to parse and load the semantic segmentation - annotation. Default: False. - poly2mask (bool): Whether to convert the instance masks from polygons - to bitmaps. Default: True. - use_img_shape (bool): Use the shape of loaded image from - previous pipeline ``LoadImageFromFile`` to generate mask. - """ - - def __init__(self, - with_bbox=True, - with_label=True, - with_mask=False, - with_seg=False, - poly2mask=True, - use_img_shape=False): - super().__init__( - with_bbox=with_bbox, - with_label=with_label, - with_mask=with_mask, - with_seg=with_seg, - poly2mask=poly2mask) - - self.use_img_shape = use_img_shape - - def process_polygons(self, polygons): - """Convert polygons to list of ndarray and filter invalid polygons. - - Args: - polygons (list[list]): Polygons of one instance. - - Returns: - list[numpy.ndarray]: Processed polygons. 
- """ - - polygons = [np.array(p).astype(np.float32) for p in polygons] - valid_polygons = [] - for polygon in polygons: - if len(polygon) % 2 == 0 and len(polygon) >= 6: - valid_polygons.append(polygon) - return valid_polygons - - def _load_masks(self, results): - ann_info = results['ann_info'] - h, w = results['img_info']['height'], results['img_info']['width'] - if self.use_img_shape: - if results.get('ori_shape', None): - h, w = results['ori_shape'][:2] - results['img_info']['height'] = h - results['img_info']['width'] = w - else: - warnings.warn('"ori_shape" not in results, use the shape ' - 'in "img_info" instead.') - gt_masks = ann_info['masks'] - if self.poly2mask: - gt_masks = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks], h, w) - else: - gt_masks = PolygonMasks( - [self.process_polygons(polygons) for polygons in gt_masks], h, - w) - gt_masks_ignore = ann_info.get('masks_ignore', None) - if gt_masks_ignore is not None: - if self.poly2mask: - gt_masks_ignore = BitmapMasks( - [self._poly2mask(mask, h, w) for mask in gt_masks_ignore], - h, w) - else: - gt_masks_ignore = PolygonMasks([ - self.process_polygons(polygons) - for polygons in gt_masks_ignore - ], h, w) - results['gt_masks_ignore'] = gt_masks_ignore - results['mask_fields'].append('gt_masks_ignore') - - results['gt_masks'] = gt_masks - results['mask_fields'].append('gt_masks') - return results - - -@PIPELINES.register_module() -class LoadImageFromNdarray(LoadImageFromFile): - """Load an image from np.ndarray. - - Similar with :obj:`LoadImageFromFile`, but the image read from - ``results['img']``, which is np.ndarray. - """ - - def __call__(self, results): - """Call functions to add image meta information. - - Args: - results (dict): Result dict with Webcam read image in - ``results['img']``. - - Returns: - dict: The dict contains loaded image and meta information. - """ - assert results['img'].dtype == 'uint8' - - img = results['img'] - if self.color_type == 'grayscale' and img.shape[2] == 3: - img = mmcv.bgr2gray(img, keepdim=True) - if self.color_type == 'color' and img.shape[2] == 1: - img = mmcv.gray2bgr(img) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = None - results['ori_filename'] = None - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - results['img_fields'] = ['img'] - return results diff --git a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/transforms.py b/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/transforms.py deleted file mode 100644 index 1ad1d2bc428964785f67c51eab855a6d8270e207..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/transforms.py +++ /dev/null @@ -1,1020 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import cv2 -import mmcv -import numpy as np -import torchvision.transforms as transforms -from mmdet.core import BitmapMasks, PolygonMasks -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines.transforms import Resize -from PIL import Image -from shapely.geometry import Polygon as plg - -import mmocr.core.evaluation.utils as eval_utils -from mmocr.utils import check_argument - - -@PIPELINES.register_module() -class RandomCropInstances: - """Randomly crop images and make sure to contain text instances. - - Args: - target_size (tuple or int): (height, width) - positive_sample_ratio (float): The probability of sampling regions - that go through positive regions. 
- """ - - def __init__( - self, - target_size, - instance_key, - mask_type='inx0', # 'inx0' or 'union_all' - positive_sample_ratio=5.0 / 8.0): - - assert mask_type in ['inx0', 'union_all'] - - self.mask_type = mask_type - self.instance_key = instance_key - self.positive_sample_ratio = positive_sample_ratio - self.target_size = target_size if (target_size is None or isinstance( - target_size, tuple)) else (target_size, target_size) - - def sample_offset(self, img_gt, img_size): - h, w = img_size - t_h, t_w = self.target_size - - # target size is bigger than origin size - t_h = t_h if t_h < h else h - t_w = t_w if t_w < w else w - if (img_gt is not None - and np.random.random_sample() < self.positive_sample_ratio - and np.max(img_gt) > 0): - - # make sure to crop the positive region - - # the minimum top left to crop positive region (h,w) - tl = np.min(np.where(img_gt > 0), axis=1) - (t_h, t_w) - tl[tl < 0] = 0 - # the maximum top left to crop positive region - br = np.max(np.where(img_gt > 0), axis=1) - (t_h, t_w) - br[br < 0] = 0 - # if br is too big so that crop the outside region of img - br[0] = min(br[0], h - t_h) - br[1] = min(br[1], w - t_w) - # - h = np.random.randint(tl[0], br[0]) if tl[0] < br[0] else 0 - w = np.random.randint(tl[1], br[1]) if tl[1] < br[1] else 0 - else: - # make sure not to crop outside of img - - h = np.random.randint(0, h - t_h) if h - t_h > 0 else 0 - w = np.random.randint(0, w - t_w) if w - t_w > 0 else 0 - - return (h, w) - - @staticmethod - def crop_img(img, offset, target_size): - h, w = img.shape[:2] - br = np.min( - np.stack((np.array(offset) + np.array(target_size), np.array( - (h, w)))), - axis=0) - return img[offset[0]:br[0], offset[1]:br[1]], np.array( - [offset[1], offset[0], br[1], br[0]]) - - def crop_bboxes(self, bboxes, canvas_bbox): - kept_bboxes = [] - kept_inx = [] - canvas_poly = eval_utils.box2polygon(canvas_bbox) - tl = canvas_bbox[0:2] - - for idx, bbox in enumerate(bboxes): - poly = eval_utils.box2polygon(bbox) - area, inters = eval_utils.poly_intersection( - poly, canvas_poly, return_poly=True) - if area == 0: - continue - xmin, ymin, xmax, ymax = inters.bounds - kept_bboxes += [ - np.array( - [xmin - tl[0], ymin - tl[1], xmax - tl[0], ymax - tl[1]], - dtype=np.float32) - ] - kept_inx += [idx] - - if len(kept_inx) == 0: - return np.array([]).astype(np.float32).reshape(0, 4), kept_inx - - return np.stack(kept_bboxes), kept_inx - - @staticmethod - def generate_mask(gt_mask, type): - - if type == 'inx0': - return gt_mask.masks[0] - if type == 'union_all': - mask = gt_mask.masks[0].copy() - for idx in range(1, len(gt_mask.masks)): - mask = np.logical_or(mask, gt_mask.masks[idx]) - return mask - - raise NotImplementedError - - def __call__(self, results): - - gt_mask = results[self.instance_key] - mask = None - if len(gt_mask.masks) > 0: - mask = self.generate_mask(gt_mask, self.mask_type) - results['crop_offset'] = self.sample_offset(mask, - results['img'].shape[:2]) - - # crop img. 
bbox = [x1,y1,x2,y2] - img, bbox = self.crop_img(results['img'], results['crop_offset'], - self.target_size) - results['img'] = img - img_shape = img.shape - results['img_shape'] = img_shape - - # crop masks - for key in results.get('mask_fields', []): - results[key] = results[key].crop(bbox) - - # for mask rcnn - for key in results.get('bbox_fields', []): - results[key], kept_inx = self.crop_bboxes(results[key], bbox) - if key == 'gt_bboxes': - # ignore gt_labels accordingly - if 'gt_labels' in results: - ori_labels = results['gt_labels'] - ori_inst_num = len(ori_labels) - results['gt_labels'] = [ - ori_labels[idx] for idx in range(ori_inst_num) - if idx in kept_inx - ] - # ignore g_masks accordingly - if 'gt_masks' in results: - ori_mask = results['gt_masks'].masks - kept_mask = [ - ori_mask[idx] for idx in range(ori_inst_num) - if idx in kept_inx - ] - target_h, target_w = bbox[3] - bbox[1], bbox[2] - bbox[0] - if len(kept_inx) > 0: - kept_mask = np.stack(kept_mask) - else: - kept_mask = np.empty((0, target_h, target_w), - dtype=np.float32) - results['gt_masks'] = BitmapMasks(kept_mask, target_h, - target_w) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class RandomRotateTextDet: - """Randomly rotate images.""" - - def __init__(self, rotate_ratio=1.0, max_angle=10): - self.rotate_ratio = rotate_ratio - self.max_angle = max_angle - - @staticmethod - def sample_angle(max_angle): - angle = np.random.random_sample() * 2 * max_angle - max_angle - return angle - - @staticmethod - def rotate_img(img, angle): - h, w = img.shape[:2] - rotation_matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1) - img_target = cv2.warpAffine( - img, rotation_matrix, (w, h), flags=cv2.INTER_NEAREST) - assert img_target.shape == img.shape - return img_target - - def __call__(self, results): - if np.random.random_sample() < self.rotate_ratio: - # rotate imgs - results['rotated_angle'] = self.sample_angle(self.max_angle) - img = self.rotate_img(results['img'], results['rotated_angle']) - results['img'] = img - img_shape = img.shape - results['img_shape'] = img_shape - - # rotate masks - for key in results.get('mask_fields', []): - masks = results[key].masks - mask_list = [] - for m in masks: - rotated_m = self.rotate_img(m, results['rotated_angle']) - mask_list.append(rotated_m) - results[key] = BitmapMasks(mask_list, *(img_shape[:2])) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class ColorJitter: - """An interface for torch color jitter so that it can be invoked in - mmdetection pipeline.""" - - def __init__(self, **kwargs): - self.transform = transforms.ColorJitter(**kwargs) - - def __call__(self, results): - # img is bgr - img = results['img'][..., ::-1] - img = Image.fromarray(img) - img = self.transform(img) - img = np.asarray(img) - img = img[..., ::-1] - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class ScaleAspectJitter(Resize): - """Resize image and segmentation mask encoded by coordinates. - - Allowed resize types are `around_min_img_scale`, `long_short_bound`, and - `indep_sample_in_range`. 
- """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=False, - resize_type='around_min_img_scale', - aspect_ratio_range=None, - long_size_bound=None, - short_size_bound=None, - scale_range=None): - super().__init__( - img_scale=img_scale, - multiscale_mode=multiscale_mode, - ratio_range=ratio_range, - keep_ratio=keep_ratio) - assert not keep_ratio - assert resize_type in [ - 'around_min_img_scale', 'long_short_bound', 'indep_sample_in_range' - ] - self.resize_type = resize_type - - if resize_type == 'indep_sample_in_range': - assert ratio_range is None - assert aspect_ratio_range is None - assert short_size_bound is None - assert long_size_bound is None - assert scale_range is not None - else: - assert scale_range is None - assert isinstance(ratio_range, tuple) - assert isinstance(aspect_ratio_range, tuple) - assert check_argument.equal_len(ratio_range, aspect_ratio_range) - - if resize_type in ['long_short_bound']: - assert short_size_bound is not None - assert long_size_bound is not None - - self.aspect_ratio_range = aspect_ratio_range - self.long_size_bound = long_size_bound - self.short_size_bound = short_size_bound - self.scale_range = scale_range - - @staticmethod - def sample_from_range(range): - assert len(range) == 2 - min_value, max_value = min(range), max(range) - value = np.random.random_sample() * (max_value - min_value) + min_value - - return value - - def _random_scale(self, results): - - if self.resize_type == 'indep_sample_in_range': - w = self.sample_from_range(self.scale_range) - h = self.sample_from_range(self.scale_range) - results['scale'] = (int(w), int(h)) # (w,h) - results['scale_idx'] = None - return - h, w = results['img'].shape[0:2] - if self.resize_type == 'long_short_bound': - scale1 = 1 - if max(h, w) > self.long_size_bound: - scale1 = self.long_size_bound / max(h, w) - scale2 = self.sample_from_range(self.ratio_range) - scale = scale1 * scale2 - if min(h, w) * scale <= self.short_size_bound: - scale = (self.short_size_bound + 10) * 1.0 / min(h, w) - elif self.resize_type == 'around_min_img_scale': - short_size = min(self.img_scale[0]) - ratio = self.sample_from_range(self.ratio_range) - scale = (ratio * short_size) / min(h, w) - else: - raise NotImplementedError - - aspect = self.sample_from_range(self.aspect_ratio_range) - h_scale = scale * math.sqrt(aspect) - w_scale = scale / math.sqrt(aspect) - results['scale'] = (int(w * w_scale), int(h * h_scale)) # (w,h) - results['scale_idx'] = None - - -@PIPELINES.register_module() -class AffineJitter: - """An interface for torchvision random affine so that it can be invoked in - mmdet pipeline.""" - - def __init__(self, - degrees=4, - translate=(0.02, 0.04), - scale=(0.9, 1.1), - shear=None, - resample=False, - fillcolor=0): - self.transform = transforms.RandomAffine( - degrees=degrees, - translate=translate, - scale=scale, - shear=shear, - resample=resample, - fillcolor=fillcolor) - - def __call__(self, results): - # img is bgr - img = results['img'][..., ::-1] - img = Image.fromarray(img) - img = self.transform(img) - img = np.asarray(img) - img = img[..., ::-1] - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class RandomCropPolyInstances: - """Randomly crop images and make sure to contain at least one intact - instance.""" - - def __init__(self, - instance_key='gt_masks', - crop_ratio=5.0 / 8.0, - min_side_ratio=0.4): - super().__init__() - 
self.instance_key = instance_key - self.crop_ratio = crop_ratio - self.min_side_ratio = min_side_ratio - - def sample_valid_start_end(self, valid_array, min_len, max_start, min_end): - - assert isinstance(min_len, int) - assert len(valid_array) > min_len - - start_array = valid_array.copy() - max_start = min(len(start_array) - min_len, max_start) - start_array[max_start:] = 0 - start_array[0] = 1 - diff_array = np.hstack([0, start_array]) - np.hstack([start_array, 0]) - region_starts = np.where(diff_array < 0)[0] - region_ends = np.where(diff_array > 0)[0] - region_ind = np.random.randint(0, len(region_starts)) - start = np.random.randint(region_starts[region_ind], - region_ends[region_ind]) - - end_array = valid_array.copy() - min_end = max(start + min_len, min_end) - end_array[:min_end] = 0 - end_array[-1] = 1 - diff_array = np.hstack([0, end_array]) - np.hstack([end_array, 0]) - region_starts = np.where(diff_array < 0)[0] - region_ends = np.where(diff_array > 0)[0] - region_ind = np.random.randint(0, len(region_starts)) - end = np.random.randint(region_starts[region_ind], - region_ends[region_ind]) - return start, end - - def sample_crop_box(self, img_size, results): - """Generate crop box and make sure not to crop the polygon instances. - - Args: - img_size (tuple(int)): The image size (h, w). - results (dict): The results dict. - """ - - assert isinstance(img_size, tuple) - h, w = img_size[:2] - - key_masks = results[self.instance_key].masks - x_valid_array = np.ones(w, dtype=np.int32) - y_valid_array = np.ones(h, dtype=np.int32) - - selected_mask = key_masks[np.random.randint(0, len(key_masks))] - selected_mask = selected_mask[0].reshape((-1, 2)).astype(np.int32) - max_x_start = max(np.min(selected_mask[:, 0]) - 2, 0) - min_x_end = min(np.max(selected_mask[:, 0]) + 3, w - 1) - max_y_start = max(np.min(selected_mask[:, 1]) - 2, 0) - min_y_end = min(np.max(selected_mask[:, 1]) + 3, h - 1) - - for key in results.get('mask_fields', []): - if len(results[key].masks) == 0: - continue - masks = results[key].masks - for mask in masks: - assert len(mask) == 1 - mask = mask[0].reshape((-1, 2)).astype(np.int32) - clip_x = np.clip(mask[:, 0], 0, w - 1) - clip_y = np.clip(mask[:, 1], 0, h - 1) - min_x, max_x = np.min(clip_x), np.max(clip_x) - min_y, max_y = np.min(clip_y), np.max(clip_y) - - x_valid_array[min_x - 2:max_x + 3] = 0 - y_valid_array[min_y - 2:max_y + 3] = 0 - - min_w = int(w * self.min_side_ratio) - min_h = int(h * self.min_side_ratio) - - x1, x2 = self.sample_valid_start_end(x_valid_array, min_w, max_x_start, - min_x_end) - y1, y2 = self.sample_valid_start_end(y_valid_array, min_h, max_y_start, - min_y_end) - - return np.array([x1, y1, x2, y2]) - - def crop_img(self, img, bbox): - assert img.ndim == 3 - h, w, _ = img.shape - assert 0 <= bbox[1] < bbox[3] <= h - assert 0 <= bbox[0] < bbox[2] <= w - return img[bbox[1]:bbox[3], bbox[0]:bbox[2]] - - def __call__(self, results): - if len(results[self.instance_key].masks) < 1: - return results - if np.random.random_sample() < self.crop_ratio: - crop_box = self.sample_crop_box(results['img'].shape, results) - results['crop_region'] = crop_box - img = self.crop_img(results['img'], crop_box) - results['img'] = img - results['img_shape'] = img.shape - - # crop and filter masks - x1, y1, x2, y2 = crop_box - w = max(x2 - x1, 1) - h = max(y2 - y1, 1) - labels = results['gt_labels'] - valid_labels = [] - for key in results.get('mask_fields', []): - if len(results[key].masks) == 0: - continue - results[key] = results[key].crop(crop_box) - # filter 
out polygons beyond crop box. - masks = results[key].masks - valid_masks_list = [] - - for ind, mask in enumerate(masks): - assert len(mask) == 1 - polygon = mask[0].reshape((-1, 2)) - if (polygon[:, 0] > - -4).all() and (polygon[:, 0] < w + 4).all() and ( - polygon[:, 1] > -4).all() and (polygon[:, 1] < - h + 4).all(): - mask[0][::2] = np.clip(mask[0][::2], 0, w) - mask[0][1::2] = np.clip(mask[0][1::2], 0, h) - if key == self.instance_key: - valid_labels.append(labels[ind]) - valid_masks_list.append(mask) - - results[key] = PolygonMasks(valid_masks_list, h, w) - results['gt_labels'] = np.array(valid_labels) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class RandomRotatePolyInstances: - - def __init__(self, - rotate_ratio=0.5, - max_angle=10, - pad_with_fixed_color=False, - pad_value=(0, 0, 0)): - """Randomly rotate images and polygon masks. - - Args: - rotate_ratio (float): The ratio of samples to operate rotation. - max_angle (int): The maximum rotation angle. - pad_with_fixed_color (bool): The flag for whether to pad rotated - image with fixed value. If set to False, the rotated image will - be padded onto cropped image. - pad_value (tuple(int)): The color value for padding rotated image. - """ - self.rotate_ratio = rotate_ratio - self.max_angle = max_angle - self.pad_with_fixed_color = pad_with_fixed_color - self.pad_value = pad_value - - def rotate(self, center, points, theta, center_shift=(0, 0)): - # rotate points. - (center_x, center_y) = center - center_y = -center_y - x, y = points[::2], points[1::2] - y = -y - - theta = theta / 180 * math.pi - cos = math.cos(theta) - sin = math.sin(theta) - - x = (x - center_x) - y = (y - center_y) - - _x = center_x + x * cos - y * sin + center_shift[0] - _y = -(center_y + x * sin + y * cos) + center_shift[1] - - points[::2], points[1::2] = _x, _y - return points - - def cal_canvas_size(self, ori_size, degree): - assert isinstance(ori_size, tuple) - angle = degree * math.pi / 180.0 - h, w = ori_size[:2] - - cos = math.cos(angle) - sin = math.sin(angle) - canvas_h = int(w * math.fabs(sin) + h * math.fabs(cos)) - canvas_w = int(w * math.fabs(cos) + h * math.fabs(sin)) - - canvas_size = (canvas_h, canvas_w) - return canvas_size - - def sample_angle(self, max_angle): - angle = np.random.random_sample() * 2 * max_angle - max_angle - return angle - - def rotate_img(self, img, angle, canvas_size): - h, w = img.shape[:2] - rotation_matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1) - rotation_matrix[0, 2] += int((canvas_size[1] - w) / 2) - rotation_matrix[1, 2] += int((canvas_size[0] - h) / 2) - - if self.pad_with_fixed_color: - target_img = cv2.warpAffine( - img, - rotation_matrix, (canvas_size[1], canvas_size[0]), - flags=cv2.INTER_NEAREST, - borderValue=self.pad_value) - else: - mask = np.zeros_like(img) - (h_ind, w_ind) = (np.random.randint(0, h * 7 // 8), - np.random.randint(0, w * 7 // 8)) - img_cut = img[h_ind:(h_ind + h // 9), w_ind:(w_ind + w // 9)] - img_cut = mmcv.imresize(img_cut, (canvas_size[1], canvas_size[0])) - mask = cv2.warpAffine( - mask, - rotation_matrix, (canvas_size[1], canvas_size[0]), - borderValue=[1, 1, 1]) - target_img = cv2.warpAffine( - img, - rotation_matrix, (canvas_size[1], canvas_size[0]), - borderValue=[0, 0, 0]) - target_img = target_img + img_cut * mask - - return target_img - - def __call__(self, results): - if np.random.random_sample() < self.rotate_ratio: - img = results['img'] - h, w = img.shape[:2] - angle = 
self.sample_angle(self.max_angle) - canvas_size = self.cal_canvas_size((h, w), angle) - center_shift = (int( - (canvas_size[1] - w) / 2), int((canvas_size[0] - h) / 2)) - - # rotate image - results['rotated_poly_angle'] = angle - img = self.rotate_img(img, angle, canvas_size) - results['img'] = img - img_shape = img.shape - results['img_shape'] = img_shape - - # rotate polygons - for key in results.get('mask_fields', []): - if len(results[key].masks) == 0: - continue - masks = results[key].masks - rotated_masks = [] - for mask in masks: - rotated_mask = self.rotate((w / 2, h / 2), mask[0], angle, - center_shift) - rotated_masks.append([rotated_mask]) - - results[key] = PolygonMasks(rotated_masks, *(img_shape[:2])) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class SquareResizePad: - - def __init__(self, - target_size, - pad_ratio=0.6, - pad_with_fixed_color=False, - pad_value=(0, 0, 0)): - """Resize or pad images to be square shape. - - Args: - target_size (int): The target size of square shaped image. - pad_with_fixed_color (bool): The flag for whether to pad rotated - image with fixed value. If set to False, the rescales image will - be padded onto cropped image. - pad_value (tuple(int)): The color value for padding rotated image. - """ - assert isinstance(target_size, int) - assert isinstance(pad_ratio, float) - assert isinstance(pad_with_fixed_color, bool) - assert isinstance(pad_value, tuple) - - self.target_size = target_size - self.pad_ratio = pad_ratio - self.pad_with_fixed_color = pad_with_fixed_color - self.pad_value = pad_value - - def resize_img(self, img, keep_ratio=True): - h, w, _ = img.shape - if keep_ratio: - t_h = self.target_size if h >= w else int(h * self.target_size / w) - t_w = self.target_size if h <= w else int(w * self.target_size / h) - else: - t_h = t_w = self.target_size - img = mmcv.imresize(img, (t_w, t_h)) - return img, (t_h, t_w) - - def square_pad(self, img): - h, w = img.shape[:2] - if h == w: - return img, (0, 0) - pad_size = max(h, w) - if self.pad_with_fixed_color: - expand_img = np.ones((pad_size, pad_size, 3), dtype=np.uint8) - expand_img[:] = self.pad_value - else: - (h_ind, w_ind) = (np.random.randint(0, h * 7 // 8), - np.random.randint(0, w * 7 // 8)) - img_cut = img[h_ind:(h_ind + h // 9), w_ind:(w_ind + w // 9)] - expand_img = mmcv.imresize(img_cut, (pad_size, pad_size)) - if h > w: - y0, x0 = 0, (h - w) // 2 - else: - y0, x0 = (w - h) // 2, 0 - expand_img[y0:y0 + h, x0:x0 + w] = img - offset = (x0, y0) - - return expand_img, offset - - def square_pad_mask(self, points, offset): - x0, y0 = offset - pad_points = points.copy() - pad_points[::2] = pad_points[::2] + x0 - pad_points[1::2] = pad_points[1::2] + y0 - return pad_points - - def __call__(self, results): - img = results['img'] - - if np.random.random_sample() < self.pad_ratio: - img, out_size = self.resize_img(img, keep_ratio=True) - img, offset = self.square_pad(img) - else: - img, out_size = self.resize_img(img, keep_ratio=False) - offset = (0, 0) - - results['img'] = img - results['img_shape'] = img.shape - - for key in results.get('mask_fields', []): - if len(results[key].masks) == 0: - continue - results[key] = results[key].resize(out_size) - masks = results[key].masks - processed_masks = [] - for mask in masks: - square_pad_mask = self.square_pad_mask(mask[0], offset) - processed_masks.append([square_pad_mask]) - - results[key] = PolygonMasks(processed_masks, *(img.shape[:2])) - - return results - 
- def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str - - -@PIPELINES.register_module() -class RandomScaling: - - def __init__(self, size=800, scale=(3. / 4, 5. / 2)): - """Random scale the image while keeping aspect. - - Args: - size (int) : Base size before scaling. - scale (tuple(float)) : The range of scaling. - """ - assert isinstance(size, int) - assert isinstance(scale, float) or isinstance(scale, tuple) - self.size = size - self.scale = scale if isinstance(scale, tuple) \ - else (1 - scale, 1 + scale) - - def __call__(self, results): - image = results['img'] - h, w, _ = results['img_shape'] - - aspect_ratio = np.random.uniform(min(self.scale), max(self.scale)) - scales = self.size * 1.0 / max(h, w) * aspect_ratio - scales = np.array([scales, scales]) - out_size = (int(h * scales[1]), int(w * scales[0])) - image = mmcv.imresize(image, out_size[::-1]) - - results['img'] = image - results['img_shape'] = image.shape - - for key in results.get('mask_fields', []): - if len(results[key].masks) == 0: - continue - results[key] = results[key].resize(out_size) - - return results - - -@PIPELINES.register_module() -class RandomCropFlip: - - def __init__(self, - pad_ratio=0.1, - crop_ratio=0.5, - iter_num=1, - min_area_ratio=0.2): - """Random crop and flip a patch of the image. - - Args: - crop_ratio (float): The ratio of cropping. - iter_num (int): Number of operations. - min_area_ratio (float): Minimal area ratio between cropped patch - and original image. - """ - assert isinstance(crop_ratio, float) - assert isinstance(iter_num, int) - assert isinstance(min_area_ratio, float) - - self.pad_ratio = pad_ratio - self.epsilon = 1e-2 - self.crop_ratio = crop_ratio - self.iter_num = iter_num - self.min_area_ratio = min_area_ratio - - def __call__(self, results): - for i in range(self.iter_num): - results = self.random_crop_flip(results) - return results - - def random_crop_flip(self, results): - image = results['img'] - polygons = results['gt_masks'].masks - ignore_polygons = results['gt_masks_ignore'].masks - all_polygons = polygons + ignore_polygons - if len(polygons) == 0: - return results - - if np.random.random() >= self.crop_ratio: - return results - - h, w, _ = results['img_shape'] - area = h * w - pad_h = int(h * self.pad_ratio) - pad_w = int(w * self.pad_ratio) - h_axis, w_axis = self.generate_crop_target(image, all_polygons, pad_h, - pad_w) - if len(h_axis) == 0 or len(w_axis) == 0: - return results - - attempt = 0 - while attempt < 10: - attempt += 1 - polys_keep = [] - polys_new = [] - ign_polys_keep = [] - ign_polys_new = [] - xx = np.random.choice(w_axis, size=2) - xmin = np.min(xx) - pad_w - xmax = np.max(xx) - pad_w - xmin = np.clip(xmin, 0, w - 1) - xmax = np.clip(xmax, 0, w - 1) - yy = np.random.choice(h_axis, size=2) - ymin = np.min(yy) - pad_h - ymax = np.max(yy) - pad_h - ymin = np.clip(ymin, 0, h - 1) - ymax = np.clip(ymax, 0, h - 1) - if (xmax - xmin) * (ymax - ymin) < area * self.min_area_ratio: - # area too small - continue - - pts = np.stack([[xmin, xmax, xmax, xmin], - [ymin, ymin, ymax, ymax]]).T.astype(np.int32) - pp = plg(pts) - fail_flag = False - for polygon in polygons: - ppi = plg(polygon[0].reshape(-1, 2)) - ppiou = eval_utils.poly_intersection(ppi, pp) - if np.abs(ppiou - float(ppi.area)) > self.epsilon and \ - np.abs(ppiou) > self.epsilon: - fail_flag = True - break - elif np.abs(ppiou - float(ppi.area)) < self.epsilon: - polys_new.append(polygon) - else: - polys_keep.append(polygon) - - for polygon in ignore_polygons: - ppi = 
plg(polygon[0].reshape(-1, 2)) - ppiou = eval_utils.poly_intersection(ppi, pp) - if np.abs(ppiou - float(ppi.area)) > self.epsilon and \ - np.abs(ppiou) > self.epsilon: - fail_flag = True - break - elif np.abs(ppiou - float(ppi.area)) < self.epsilon: - ign_polys_new.append(polygon) - else: - ign_polys_keep.append(polygon) - - if fail_flag: - continue - else: - break - - cropped = image[ymin:ymax, xmin:xmax, :] - select_type = np.random.randint(3) - if select_type == 0: - img = np.ascontiguousarray(cropped[:, ::-1]) - elif select_type == 1: - img = np.ascontiguousarray(cropped[::-1, :]) - else: - img = np.ascontiguousarray(cropped[::-1, ::-1]) - image[ymin:ymax, xmin:xmax, :] = img - results['img'] = image - - if len(polys_new) + len(ign_polys_new) != 0: - height, width, _ = cropped.shape - if select_type == 0: - for idx, polygon in enumerate(polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 0] = width - poly[:, 0] + 2 * xmin - polys_new[idx] = [poly.reshape(-1, )] - for idx, polygon in enumerate(ign_polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 0] = width - poly[:, 0] + 2 * xmin - ign_polys_new[idx] = [poly.reshape(-1, )] - elif select_type == 1: - for idx, polygon in enumerate(polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 1] = height - poly[:, 1] + 2 * ymin - polys_new[idx] = [poly.reshape(-1, )] - for idx, polygon in enumerate(ign_polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 1] = height - poly[:, 1] + 2 * ymin - ign_polys_new[idx] = [poly.reshape(-1, )] - else: - for idx, polygon in enumerate(polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 0] = width - poly[:, 0] + 2 * xmin - poly[:, 1] = height - poly[:, 1] + 2 * ymin - polys_new[idx] = [poly.reshape(-1, )] - for idx, polygon in enumerate(ign_polys_new): - poly = polygon[0].reshape(-1, 2) - poly[:, 0] = width - poly[:, 0] + 2 * xmin - poly[:, 1] = height - poly[:, 1] + 2 * ymin - ign_polys_new[idx] = [poly.reshape(-1, )] - polygons = polys_keep + polys_new - ignore_polygons = ign_polys_keep + ign_polys_new - results['gt_masks'] = PolygonMasks(polygons, *(image.shape[:2])) - results['gt_masks_ignore'] = PolygonMasks(ignore_polygons, - *(image.shape[:2])) - - return results - - def generate_crop_target(self, image, all_polys, pad_h, pad_w): - """Generate crop target and make sure not to crop the polygon - instances. - - Args: - image (ndarray): The image waited to be crop. - all_polys (list[list[ndarray]]): All polygons including ground - truth polygons and ground truth ignored polygons. - pad_h (int): Padding length of height. - pad_w (int): Padding length of width. - Returns: - h_axis (ndarray): Vertical cropping range. - w_axis (ndarray): Horizontal cropping range. 
- """ - h, w, _ = image.shape - h_array = np.zeros((h + pad_h * 2), dtype=np.int32) - w_array = np.zeros((w + pad_w * 2), dtype=np.int32) - - text_polys = [] - for polygon in all_polys: - rect = cv2.minAreaRect(polygon[0].astype(np.int32).reshape(-1, 2)) - box = cv2.boxPoints(rect) - box = np.int0(box) - text_polys.append([box[0], box[1], box[2], box[3]]) - - polys = np.array(text_polys, dtype=np.int32) - for poly in polys: - poly = np.round(poly, decimals=0).astype(np.int32) - minx = np.min(poly[:, 0]) - maxx = np.max(poly[:, 0]) - w_array[minx + pad_w:maxx + pad_w] = 1 - miny = np.min(poly[:, 1]) - maxy = np.max(poly[:, 1]) - h_array[miny + pad_h:maxy + pad_h] = 1 - - h_axis = np.where(h_array == 0)[0] - w_axis = np.where(w_array == 0)[0] - return h_axis, w_axis - - -@PIPELINES.register_module() -class PyramidRescale: - """Resize the image to the base shape, downsample it with gaussian pyramid, - and rescale it back to original size. - - Adapted from https://github.com/FangShancheng/ABINet. - - Args: - factor (int): The decay factor from base size, or the number of - downsampling operations from the base layer. - base_shape (tuple(int)): The shape of the base layer of the pyramid. - randomize_factor (bool): If True, the final factor would be a random - integer in [0, factor]. - - :Required Keys: - - | ``img`` (ndarray): The input image. - - :Affected Keys: - :Modified: - - | ``img`` (ndarray): The modified image. - """ - - def __init__(self, factor=4, base_shape=(128, 512), randomize_factor=True): - assert isinstance(factor, int) - assert isinstance(base_shape, list) or isinstance(base_shape, tuple) - assert len(base_shape) == 2 - assert isinstance(randomize_factor, bool) - self.factor = factor if not randomize_factor else np.random.randint( - 0, factor + 1) - self.base_w, self.base_h = base_shape - - def __call__(self, results): - assert 'img' in results - if self.factor == 0: - return results - img = results['img'] - src_h, src_w = img.shape[:2] - scale_img = mmcv.imresize(img, (self.base_w, self.base_h)) - for _ in range(self.factor): - scale_img = cv2.pyrDown(scale_img) - scale_img = mmcv.imresize(scale_img, (src_w, src_h)) - results['img'] = scale_img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(factor={self.factor}, ' - repr_str += f'basew={self.basew}, baseh={self.baseh})' - return repr_str diff --git a/spaces/tomofi/MMOCR/tests/test_core/test_deploy_utils.py b/spaces/tomofi/MMOCR/tests/test_core/test_deploy_utils.py deleted file mode 100644 index 10541ca8f77edc86f5be6848f82579a04e454343..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_core/test_deploy_utils.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import tempfile -from functools import partial - -import mmcv -import numpy as np -import pytest -import torch -from packaging import version - -from mmocr.core.deployment import (ONNXRuntimeDetector, ONNXRuntimeRecognizer, - TensorRTDetector, TensorRTRecognizer) -from mmocr.models import build_detector - - -@pytest.mark.skipif(torch.__version__ == 'parrots', reason='skip parrots.') -@pytest.mark.skipif( - version.parse(torch.__version__) < version.parse('1.4.0'), - reason='skip if torch=1.3.x') -@pytest.mark.skipif( - not torch.cuda.is_available(), reason='skip if on cpu device') -def test_detector_wrapper(): - try: - import onnxruntime as ort # noqa: F401 - import tensorrt as trt - from mmcv.tensorrt import onnx2trt, save_trt_engine - except ImportError: - pytest.skip('ONNXRuntime or TensorRT is not available.') - - cfg = dict( - model=dict( - type='DBNet', - backbone=dict( - type='ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='BN', requires_grad=True), - init_cfg=dict( - type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=False, - style='caffe'), - neck=dict( - type='FPNC', - in_channels=[64, 128, 256, 512], - lateral_channels=256), - bbox_head=dict( - type='DBHead', - text_repr_type='quad', - in_channels=256, - loss=dict(type='DBLoss', alpha=5.0, beta=10.0, - bbce_loss=True)), - train_cfg=None, - test_cfg=None)) - - cfg = mmcv.Config(cfg) - - pytorch_model = build_detector(cfg.model, None, None) - - # prepare data - inputs = torch.rand(1, 3, 224, 224) - img_metas = [{ - 'img_shape': [1, 3, 224, 224], - 'ori_shape': [1, 3, 224, 224], - 'pad_shape': [1, 3, 224, 224], - 'filename': None, - 'scale_factor': np.array([1, 1, 1, 1]) - }] - - pytorch_model.forward = pytorch_model.forward_dummy - with tempfile.TemporaryDirectory() as tmpdirname: - onnx_path = f'{tmpdirname}/tmp.onnx' - with torch.no_grad(): - torch.onnx.export( - pytorch_model, - inputs, - onnx_path, - input_names=['input'], - output_names=['output'], - export_params=True, - keep_initializers_as_inputs=False, - verbose=False, - opset_version=11) - - # TensorRT part - def get_GiB(x: int): - """return x GiB.""" - return x * (1 << 30) - - trt_path = onnx_path.replace('.onnx', '.trt') - min_shape = [1, 3, 224, 224] - max_shape = [1, 3, 224, 224] - # create trt engine and wrapper - opt_shape_dict = {'input': [min_shape, min_shape, max_shape]} - max_workspace_size = get_GiB(1) - trt_engine = onnx2trt( - onnx_path, - opt_shape_dict, - log_level=trt.Logger.ERROR, - fp16_mode=False, - max_workspace_size=max_workspace_size) - save_trt_engine(trt_engine, trt_path) - print(f'Successfully created TensorRT engine: {trt_path}') - - wrap_onnx = ONNXRuntimeDetector(onnx_path, cfg, 0) - wrap_trt = TensorRTDetector(trt_path, cfg, 0) - - assert isinstance(wrap_onnx, ONNXRuntimeDetector) - assert isinstance(wrap_trt, TensorRTDetector) - - with torch.no_grad(): - onnx_outputs = wrap_onnx.simple_test(inputs, img_metas, rescale=False) - trt_outputs = wrap_onnx.simple_test(inputs, img_metas, rescale=False) - - assert isinstance(onnx_outputs[0], dict) - assert isinstance(trt_outputs[0], dict) - assert 'boundary_result' in onnx_outputs[0] - assert 'boundary_result' in trt_outputs[0] - - -@pytest.mark.skipif(torch.__version__ == 'parrots', reason='skip parrots.') -@pytest.mark.skipif( - version.parse(torch.__version__) < version.parse('1.4.0'), - reason='skip if torch=1.3.x') -@pytest.mark.skipif( - not torch.cuda.is_available(), reason='skip if on cpu device') -def 
test_recognizer_wrapper(): - try: - import onnxruntime as ort # noqa: F401 - import tensorrt as trt - from mmcv.tensorrt import onnx2trt, save_trt_engine - except ImportError: - pytest.skip('ONNXRuntime or TensorRT is not available.') - - cfg = dict( - label_convertor=dict( - type='CTCConvertor', - dict_type='DICT36', - with_unknown=False, - lower=True), - model=dict( - type='CRNNNet', - preprocessor=None, - backbone=dict( - type='VeryDeepVgg', leaky_relu=False, input_channels=1), - encoder=None, - decoder=dict(type='CRNNDecoder', in_channels=512, rnn_flag=True), - loss=dict(type='CTCLoss'), - label_convertor=dict( - type='CTCConvertor', - dict_type='DICT36', - with_unknown=False, - lower=True), - pretrained=None), - train_cfg=None, - test_cfg=None) - - cfg = mmcv.Config(cfg) - - pytorch_model = build_detector(cfg.model, None, None) - - # prepare data - inputs = torch.rand(1, 1, 32, 32) - img_metas = [{ - 'img_shape': [1, 1, 32, 32], - 'ori_shape': [1, 1, 32, 32], - 'pad_shape': [1, 1, 32, 32], - 'filename': None, - 'scale_factor': np.array([1, 1, 1, 1]) - }] - - pytorch_model.forward = partial( - pytorch_model.forward, - img_metas=img_metas, - return_loss=False, - rescale=True) - with tempfile.TemporaryDirectory() as tmpdirname: - onnx_path = f'{tmpdirname}/tmp.onnx' - with torch.no_grad(): - torch.onnx.export( - pytorch_model, - inputs, - onnx_path, - input_names=['input'], - output_names=['output'], - export_params=True, - keep_initializers_as_inputs=False, - verbose=False, - opset_version=11) - - # TensorRT part - def get_GiB(x: int): - """return x GiB.""" - return x * (1 << 30) - - trt_path = onnx_path.replace('.onnx', '.trt') - min_shape = [1, 1, 32, 32] - max_shape = [1, 1, 32, 32] - # create trt engine and wrapper - opt_shape_dict = {'input': [min_shape, min_shape, max_shape]} - max_workspace_size = get_GiB(1) - trt_engine = onnx2trt( - onnx_path, - opt_shape_dict, - log_level=trt.Logger.ERROR, - fp16_mode=False, - max_workspace_size=max_workspace_size) - save_trt_engine(trt_engine, trt_path) - print(f'Successfully created TensorRT engine: {trt_path}') - - wrap_onnx = ONNXRuntimeRecognizer(onnx_path, cfg, 0) - wrap_trt = TensorRTRecognizer(trt_path, cfg, 0) - - assert isinstance(wrap_onnx, ONNXRuntimeRecognizer) - assert isinstance(wrap_trt, TensorRTRecognizer) - - with torch.no_grad(): - onnx_outputs = wrap_onnx.simple_test(inputs, img_metas, rescale=False) - trt_outputs = wrap_onnx.simple_test(inputs, img_metas, rescale=False) - - assert isinstance(onnx_outputs[0], dict) - assert isinstance(trt_outputs[0], dict) - assert 'text' in onnx_outputs[0] - assert 'text' in trt_outputs[0] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/faster_rcnn_hrnetv2p_w32_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/faster_rcnn_hrnetv2p_w32_1x_coco.py deleted file mode 100644 index 190e81c710b0e5e9eb34bafff01c9dd4a8ef130c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/hrnet/faster_rcnn_hrnetv2p_w32_1x_coco.py +++ /dev/null @@ -1,36 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w32', - backbone=dict( - _delete_=True, - type='HRNet', - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(32, 64)), - stage3=dict( - num_modules=4, - 
num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(32, 64, 128)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(32, 64, 128, 256)))), - neck=dict( - _delete_=True, - type='HRFPN', - in_channels=[32, 64, 128, 256], - out_channels=256)) diff --git a/spaces/totsunemario/minimal/README.md b/spaces/totsunemario/minimal/README.md deleted file mode 100644 index 5293c44b92540389a536bf13b94ce1e630a226e7..0000000000000000000000000000000000000000 --- a/spaces/totsunemario/minimal/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Minimal -emoji: 🦀 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/trysem/TableIMG2-CSV/app.py b/spaces/trysem/TableIMG2-CSV/app.py deleted file mode 100644 index c721d68873eddc5beecb4c107e772d1558430f76..0000000000000000000000000000000000000000 --- a/spaces/trysem/TableIMG2-CSV/app.py +++ /dev/null @@ -1,510 +0,0 @@ -import streamlit as st -from PIL import Image, ImageEnhance -import statistics -import os -import string -from collections import Counter -from itertools import tee, count -# import TDTSR -import pytesseract -from pytesseract import Output -import json -import pandas as pd -import matplotlib.pyplot as plt -import cv2 -import numpy as np -# from transformers import TrOCRProcessor, VisionEncoderDecoderModel -# from cv2 import dnn_superres -from transformers import DetrFeatureExtractor -#from transformers import DetrForObjectDetection -from transformers import TableTransformerForObjectDetection -import torch -import asyncio -# pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' - - -st.set_option('deprecation.showPyplotGlobalUse', False) -st.set_page_config(layout='wide') -st.title("Table Detection and Table Structure Recognition") -st.write("Implemented by MSFT team: https://github.com/microsoft/table-transformer") - - - -def PIL_to_cv(pil_img): - return cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR) - -def cv_to_PIL(cv_img): - return Image.fromarray(cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)) - - -async def pytess(cell_pil_img): - return ' '.join(pytesseract.image_to_data(cell_pil_img, output_type=Output.DICT, config='-c tessedit_char_blacklist=œ˜â€œï¬â™Ã©œ¢!|”?«“¥ --psm 6 preserve_interword_spaces')['text']).strip() - - -# def super_res(pil_img): - # ''' - # Useful for low-res docs - # ''' - # requires opencv-contrib-python installed without the opencv-python - # sr = dnn_superres.DnnSuperResImpl_create() - # image = PIL_to_cv(pil_img) - # model_path = "/data/Salman/TRD/code/table-transformer/transformers/LapSRN_x2.pb" - # model_name = 'lapsrn' - # model_scale = 2 - # sr.readModel(model_path) - # sr.setModel(model_name, model_scale) - # final_img = sr.upsample(image) - # final_img = cv_to_PIL(final_img) - - # return final_img - - -def sharpen_image(pil_img): - - img = PIL_to_cv(pil_img) - sharpen_kernel = np.array([[-1, -1, -1], - [-1, 9, -1], - [-1, -1, -1]]) - - sharpen = cv2.filter2D(img, -1, sharpen_kernel) - pil_img = cv_to_PIL(sharpen) - return pil_img - - -def uniquify(seq, suffs = count(1)): - """Make all the items unique by adding a suffix (1, 2, etc). - Credit: https://stackoverflow.com/questions/30650474/python-rename-duplicates-in-list-with-progressive-numbers-without-sorting-list - `seq` is mutable sequence of strings. 
- `suffs` is an optional alternative suffix iterable. - """ - not_unique = [k for k,v in Counter(seq).items() if v>1] - - suff_gens = dict(zip(not_unique, tee(suffs, len(not_unique)))) - for idx,s in enumerate(seq): - try: - suffix = str(next(suff_gens[s])) - except KeyError: - continue - else: - seq[idx] += suffix - - return seq - -def binarizeBlur_image(pil_img): - image = PIL_to_cv(pil_img) - thresh = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)[1] - - result = cv2.GaussianBlur(thresh, (5,5), 0) - result = 255 - result - return cv_to_PIL(result) - - - -def td_postprocess(pil_img): - ''' - Removes gray background from tables - ''' - img = PIL_to_cv(pil_img) - - hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) - mask = cv2.inRange(hsv, (0, 0, 100), (255, 5, 255)) # (0, 0, 100), (255, 5, 255) - nzmask = cv2.inRange(hsv, (0, 0, 5), (255, 255, 255)) # (0, 0, 5), (255, 255, 255)) - nzmask = cv2.erode(nzmask, np.ones((3,3))) # (3,3) - mask = mask & nzmask - - new_img = img.copy() - new_img[np.where(mask)] = 255 - - - return cv_to_PIL(new_img) - -# def super_res(pil_img): -# # requires opencv-contrib-python installed without the opencv-python -# sr = dnn_superres.DnnSuperResImpl_create() -# image = PIL_to_cv(pil_img) -# model_path = "./LapSRN_x8.pb" -# model_name = model_path.split('/')[1].split('_')[0].lower() -# model_scale = int(model_path.split('/')[1].split('_')[1].split('.')[0][1]) - -# sr.readModel(model_path) -# sr.setModel(model_name, model_scale) -# final_img = sr.upsample(image) -# final_img = cv_to_PIL(final_img) - -# return final_img - -def table_detector(image, THRESHOLD_PROBA): - ''' - Table detection using DEtect-object TRansformer pre-trained on 1 million tables - - ''' - - feature_extractor = DetrFeatureExtractor(do_resize=True, size=800, max_size=800) - encoding = feature_extractor(image, return_tensors="pt") - - model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection") - - with torch.no_grad(): - outputs = model(**encoding) - - probas = outputs.logits.softmax(-1)[0, :, :-1] - keep = probas.max(-1).values > THRESHOLD_PROBA - - target_sizes = torch.tensor(image.size[::-1]).unsqueeze(0) - postprocessed_outputs = feature_extractor.post_process(outputs, target_sizes) - bboxes_scaled = postprocessed_outputs[0]['boxes'][keep] - - return (model, probas[keep], bboxes_scaled) - - -def table_struct_recog(image, THRESHOLD_PROBA): - ''' - Table structure recognition using DEtect-object TRansformer pre-trained on 1 million tables - ''' - - feature_extractor = DetrFeatureExtractor(do_resize=True, size=1000, max_size=1000) - encoding = feature_extractor(image, return_tensors="pt") - - model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-structure-recognition") - with torch.no_grad(): - outputs = model(**encoding) - - probas = outputs.logits.softmax(-1)[0, :, :-1] - keep = probas.max(-1).values > THRESHOLD_PROBA - - target_sizes = torch.tensor(image.size[::-1]).unsqueeze(0) - postprocessed_outputs = feature_extractor.post_process(outputs, target_sizes) - bboxes_scaled = postprocessed_outputs[0]['boxes'][keep] - - return (model, probas[keep], bboxes_scaled) - - - - - -class TableExtractionPipeline(): - - colors = ["red", "blue", "green", "yellow", "orange", "violet"] - - # colors = ["red", "blue", "green", "red", "red", "red"] - - def add_padding(self, pil_img, top, right, bottom, left, color=(255,255,255)): - ''' - Image padding as part of TSR pre-processing to prevent missing table edges - ''' - width, height = 
pil_img.size - new_width = width + right + left - new_height = height + top + bottom - result = Image.new(pil_img.mode, (new_width, new_height), color) - result.paste(pil_img, (left, top)) - return result - - def plot_results_detection(self, c1, model, pil_img, prob, boxes, delta_xmin, delta_ymin, delta_xmax, delta_ymax): - ''' - crop_tables and plot_results_detection must have same co-ord shifts because 1 only plots the other one updates co-ordinates - ''' - # st.write('img_obj') - # st.write(pil_img) - plt.imshow(pil_img) - ax = plt.gca() - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - cl = p.argmax() - xmin, ymin, xmax, ymax = xmin-delta_xmin, ymin-delta_ymin, xmax+delta_xmax, ymax+delta_ymax - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,fill=False, color='red', linewidth=3)) - text = f'{model.config.id2label[cl.item()]}: {p[cl]:0.2f}' - ax.text(xmin-20, ymin-50, text, fontsize=10,bbox=dict(facecolor='yellow', alpha=0.5)) - plt.axis('off') - c1.pyplot() - - - def crop_tables(self, pil_img, prob, boxes, delta_xmin, delta_ymin, delta_xmax, delta_ymax): - ''' - crop_tables and plot_results_detection must have same co-ord shifts because 1 only plots the other one updates co-ordinates - ''' - cropped_img_list = [] - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - - xmin, ymin, xmax, ymax = xmin-delta_xmin, ymin-delta_ymin, xmax+delta_xmax, ymax+delta_ymax - cropped_img = pil_img.crop((xmin, ymin, xmax, ymax)) - cropped_img_list.append(cropped_img) - - - return cropped_img_list - - def generate_structure(self, c2, model, pil_img, prob, boxes, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom): - ''' - Co-ordinates are adjusted here by 3 'pixels' - To plot table pillow image and the TSR bounding boxes on the table - ''' - # st.write('img_obj') - # st.write(pil_img) - plt.figure(figsize=(32,20)) - plt.imshow(pil_img) - ax = plt.gca() - rows = {} - cols = {} - idx = 0 - - - for p, (xmin, ymin, xmax, ymax) in zip(prob, boxes.tolist()): - - xmin, ymin, xmax, ymax = xmin, ymin, xmax, ymax - cl = p.argmax() - class_text = model.config.id2label[cl.item()] - text = f'{class_text}: {p[cl]:0.2f}' - # or (class_text == 'table column') - if (class_text == 'table row') or (class_text =='table projected row header') or (class_text == 'table column'): - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,fill=False, color=self.colors[cl.item()], linewidth=2)) - ax.text(xmin-10, ymin-10, text, fontsize=5, bbox=dict(facecolor='yellow', alpha=0.5)) - - if class_text == 'table row': - rows['table row.'+str(idx)] = (xmin, ymin-expand_rowcol_bbox_top, xmax, ymax+expand_rowcol_bbox_bottom) - if class_text == 'table column': - cols['table column.'+str(idx)] = (xmin, ymin-expand_rowcol_bbox_top, xmax, ymax+expand_rowcol_bbox_bottom) - - idx += 1 - - - plt.axis('on') - c2.pyplot() - return rows, cols - - def sort_table_featuresv2(self, rows:dict, cols:dict): - # Sometimes the header and first row overlap, and we need the header bbox not to have first row's bbox inside the headers bbox - rows_ = {table_feature : (xmin, ymin, xmax, ymax) for table_feature, (xmin, ymin, xmax, ymax) in sorted(rows.items(), key=lambda tup: tup[1][1])} - cols_ = {table_feature : (xmin, ymin, xmax, ymax) for table_feature, (xmin, ymin, xmax, ymax) in sorted(cols.items(), key=lambda tup: tup[1][0])} - - return rows_, cols_ - - def individual_table_featuresv2(self, pil_img, rows:dict, cols:dict): - - for k, v in rows.items(): - xmin, ymin, xmax, ymax = v - cropped_img = 
pil_img.crop((xmin, ymin, xmax, ymax)) - rows[k] = xmin, ymin, xmax, ymax, cropped_img - - for k, v in cols.items(): - xmin, ymin, xmax, ymax = v - cropped_img = pil_img.crop((xmin, ymin, xmax, ymax)) - cols[k] = xmin, ymin, xmax, ymax, cropped_img - - return rows, cols - - - def object_to_cellsv2(self, master_row:dict, cols:dict, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom, padd_left): - '''Removes redundant bbox for rows&columns and divides each row into cells from columns - Args: - - Returns: - - - ''' - cells_img = {} - header_idx = 0 - row_idx = 0 - previous_xmax_col = 0 - new_cols = {} - new_master_row = {} - previous_ymin_row = 0 - new_cols = cols - new_master_row = master_row - ## Below 2 for loops remove redundant bounding boxes ### - # for k_col, v_col in cols.items(): - # xmin_col, _, xmax_col, _, col_img = v_col - # if (np.isclose(previous_xmax_col, xmax_col, atol=5)) or (xmin_col >= xmax_col): - # print('Found a column with double bbox') - # continue - # previous_xmax_col = xmax_col - # new_cols[k_col] = v_col - - # for k_row, v_row in master_row.items(): - # _, ymin_row, _, ymax_row, row_img = v_row - # if (np.isclose(previous_ymin_row, ymin_row, atol=5)) or (ymin_row >= ymax_row): - # print('Found a row with double bbox') - # continue - # previous_ymin_row = ymin_row - # new_master_row[k_row] = v_row - ###################################################### - for k_row, v_row in new_master_row.items(): - - _, _, _, _, row_img = v_row - xmax, ymax = row_img.size - xa, ya, xb, yb = 0, 0, 0, ymax - row_img_list = [] - # plt.imshow(row_img) - # st.pyplot() - for idx, kv in enumerate(new_cols.items()): - k_col, v_col = kv - xmin_col, _, xmax_col, _, col_img = v_col - xmin_col, xmax_col = xmin_col - padd_left - 10, xmax_col - padd_left - # plt.imshow(col_img) - # st.pyplot() - # xa + 3 : to remove borders on the left side of the cropped cell - # yb = 3: to remove row information from the above row of the cropped cell - # xb - 3: to remove borders on the right side of the cropped cell - xa = xmin_col - xb = xmax_col - if idx == 0: - xa = 0 - if idx == len(new_cols)-1: - xb = xmax - xa, ya, xb, yb = xa, ya, xb, yb - - row_img_cropped = row_img.crop((xa, ya, xb, yb)) - row_img_list.append(row_img_cropped) - - cells_img[k_row+'.'+str(row_idx)] = row_img_list - row_idx += 1 - - return cells_img, len(new_cols), len(new_master_row)-1 - - def clean_dataframe(self, df): - ''' - Remove irrelevant symbols that appear with tesseractOCR - ''' - # df.columns = [col.replace('|', '') for col in df.columns] - - for col in df.columns: - - df[col]=df[col].str.replace("'", '', regex=True) - df[col]=df[col].str.replace('"', '', regex=True) - df[col]=df[col].str.replace(']', '', regex=True) - df[col]=df[col].str.replace('[', '', regex=True) - df[col]=df[col].str.replace('{', '', regex=True) - df[col]=df[col].str.replace('}', '', regex=True) - return df - - @st.cache - def convert_df(self, df): - return df.to_csv().encode('utf-8') - - - def create_dataframe(self, c3, cells_pytess_result:list, max_cols:int, max_rows:int): - '''Create dataframe using list of cell values of the table, also checks for valid header of dataframe - Args: - cells_pytess_result: list of strings, each element representing a cell in a table - max_cols, max_rows: number of columns and rows - Returns: - dataframe : final dataframe after all pre-processing - ''' - - headers = cells_pytess_result[:max_cols] - new_headers = uniquify(headers, (f' {x!s}' for x in string.ascii_lowercase)) - counter = 0 - - cells_list = 
cells_pytess_result[max_cols:] - df = pd.DataFrame("", index=range(0, max_rows), columns=new_headers) - - cell_idx = 0 - for nrows in range(max_rows): - for ncols in range(max_cols): - df.iat[nrows, ncols] = str(cells_list[cell_idx]) - cell_idx += 1 - - ## To check if there are duplicate headers if result of uniquify+col == col - ## This check removes headers when all headers are empty or if median of header word count is less than 6 - for x, col in zip(string.ascii_lowercase, new_headers): - if f' {x!s}' == col: - counter += 1 - header_char_count = [len(col) for col in new_headers] - - # if (counter == len(new_headers)) or (statistics.median(header_char_count) < 6): - # st.write('woooot') - # df.columns = uniquify(df.iloc[0], (f' {x!s}' for x in string.ascii_lowercase)) - # df = df.iloc[1:,:] - - df = self.clean_dataframe(df) - - c3.dataframe(df) - csv = self.convert_df(df) - c3.download_button("Download table", csv, "file.csv", "text/csv", key='download-csv') - - return df - - - - - - - async def start_process(self, image_path:str, TD_THRESHOLD, TSR_THRESHOLD, padd_top, padd_left, padd_bottom, padd_right, delta_xmin, delta_ymin, delta_xmax, delta_ymax, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom): - ''' - Initiates process of generating pandas dataframes from raw pdf-page images - - ''' - image = Image.open(image_path).convert("RGB") - model, probas, bboxes_scaled = table_detector(image, THRESHOLD_PROBA=TD_THRESHOLD) - - if bboxes_scaled.nelement() == 0: - st.write('No table found in the pdf-page image') - return '' - - # try: - # st.write('Document: '+image_path.split('/')[-1]) - c1, c2, c3 = st.columns((1,1,1)) - - self.plot_results_detection(c1, model, image, probas, bboxes_scaled, delta_xmin, delta_ymin, delta_xmax, delta_ymax) - cropped_img_list = self.crop_tables(image, probas, bboxes_scaled, delta_xmin, delta_ymin, delta_xmax, delta_ymax) - - for unpadded_table in cropped_img_list: - - table = self.add_padding(unpadded_table, padd_top, padd_right, padd_bottom, padd_left) - # table = super_res(table) - # table = binarizeBlur_image(table) - # table = sharpen_image(table) # Test sharpen image next - # table = td_postprocess(table) - - model, probas, bboxes_scaled = table_struct_recog(table, THRESHOLD_PROBA=TSR_THRESHOLD) - rows, cols = self.generate_structure(c2, model, table, probas, bboxes_scaled, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom) - # st.write(len(rows), len(cols)) - rows, cols = self.sort_table_featuresv2(rows, cols) - master_row, cols = self.individual_table_featuresv2(table, rows, cols) - - cells_img, max_cols, max_rows = self.object_to_cellsv2(master_row, cols, expand_rowcol_bbox_top, expand_rowcol_bbox_bottom, padd_left) - - sequential_cell_img_list = [] - for k, img_list in cells_img.items(): - for img in img_list: - # img = super_res(img) - # img = sharpen_image(img) # Test sharpen image next - # img = binarizeBlur_image(img) - # img = self.add_padding(img, 10,10,10,10) - # plt.imshow(img) - # c3.pyplot() - sequential_cell_img_list.append(pytess(img)) - - cells_pytess_result = await asyncio.gather(*sequential_cell_img_list) - - - self.create_dataframe(c3, cells_pytess_result, max_cols, max_rows) - st.write('Errors in OCR is due to either quality of the image or performance of the OCR') - # except: - # st.write('Either incorrectly identified table or no table, to debug remove try/except') - # break - # break - - - - -if __name__ == "__main__": - - img_name = st.file_uploader("Upload an image with table(s)") - st1, st2 = st.columns((1,1)) - TD_th = 
st1.slider('Table detection threshold', 0.0, 1.0, 0.6) - TSR_th = st2.slider('Table structure recognition threshold', 0.0, 1.0, 0.8) - - st1, st2, st3, st4 = st.columns((1,1,1,1)) - - padd_top = st1.slider('Padding top', 0, 200, 20) - padd_left = st2.slider('Padding left', 0, 200, 20) - padd_right = st3.slider('Padding right', 0, 200, 20) - padd_bottom = st4.slider('Padding bottom', 0, 200, 20) - - te = TableExtractionPipeline() - # for img in image_list: - if img_name is not None: - asyncio.run(te.start_process(img_name, TD_THRESHOLD=TD_th , TSR_THRESHOLD=TSR_th , padd_top=padd_top, padd_left=padd_left, padd_bottom=padd_bottom, padd_right=padd_right, delta_xmin=0, delta_ymin=0, delta_xmax=0, delta_ymax=0, expand_rowcol_bbox_top=0, expand_rowcol_bbox_bottom=0)) - - - diff --git a/spaces/tsi-org/LLaVA/scripts/sqa_eval_gather.sh b/spaces/tsi-org/LLaVA/scripts/sqa_eval_gather.sh deleted file mode 100644 index 525bd43b850e9f6a923158abd23bca6f8d15650e..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/scripts/sqa_eval_gather.sh +++ /dev/null @@ -1,18 +0,0 @@ -#!/bin/bash - -CHUNKS=8 -output_file="test_llava-13b.jsonl" - -# Clear out the output file if it exists. -> "$output_file" - -# Loop through the indices and concatenate each file. -for idx in $(seq 0 $((CHUNKS-1))); do - cat "./test_llava-13b-chunk${idx}.jsonl" >> "$output_file" -done - -python llava/eval/eval_science_qa.py \ - --base-dir ~/haotian/datasets/ScienceQA/data/scienceqa \ - --result-file ./test_llava-13b.jsonl \ - --output-file ./test_llava-13b_output.json \ - --output-result ./test_llava-13b_result.json diff --git a/spaces/ttt246/brain/Extension/src/pages/Devtools/index.js b/spaces/ttt246/brain/Extension/src/pages/Devtools/index.js deleted file mode 100644 index 647319a1f9ece574d6bf51b0218c8e3b4bea4775..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Extension/src/pages/Devtools/index.js +++ /dev/null @@ -1,5 +0,0 @@ -chrome.devtools.panels.create( - 'Dev Tools from chrome-extension-boilerplate-react', - 'icon-34.png', - 'panel.html' -); diff --git a/spaces/typesdigital/llm-agents-tora-70b-v1.0/app.py b/spaces/typesdigital/llm-agents-tora-70b-v1.0/app.py deleted file mode 100644 index db3c4eff53d03f9494d4366d6e99ccef5680c4a8..0000000000000000000000000000000000000000 --- a/spaces/typesdigital/llm-agents-tora-70b-v1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/llm-agents/tora-70b-v1.0").launch() \ No newline at end of file diff --git a/spaces/ucalyptus/PTI/models/e4e/latent_codes_pool.py b/spaces/ucalyptus/PTI/models/e4e/latent_codes_pool.py deleted file mode 100644 index 0281d4b5e80f8eb26e824fa35b4f908dcb6634e6..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/models/e4e/latent_codes_pool.py +++ /dev/null @@ -1,55 +0,0 @@ -import random -import torch - - -class LatentCodesPool: - """This class implements latent codes buffer that stores previously generated w latent codes. - This buffer enables us to update discriminators using a history of generated w's - rather than the ones produced by the latest encoder. - """ - - def __init__(self, pool_size): - """Initialize the ImagePool class - Parameters: - pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created - """ - self.pool_size = pool_size - if self.pool_size > 0: # create an empty pool - self.num_ws = 0 - self.ws = [] - - def query(self, ws): - """Return w's from the pool. 
- Parameters: - ws: the latest generated w's from the generator - Returns w's from the buffer. - By 50/100, the buffer will return input w's. - By 50/100, the buffer will return w's previously stored in the buffer, - and insert the current w's to the buffer. - """ - if self.pool_size == 0: # if the buffer size is 0, do nothing - return ws - return_ws = [] - for w in ws: # ws.shape: (batch, 512) or (batch, n_latent, 512) - # w = torch.unsqueeze(image.data, 0) - if w.ndim == 2: - i = random.randint(0, len(w) - 1) # apply a random latent index as a candidate - w = w[i] - self.handle_w(w, return_ws) - return_ws = torch.stack(return_ws, 0) # collect all the images and return - return return_ws - - def handle_w(self, w, return_ws): - if self.num_ws < self.pool_size: # if the buffer is not full; keep inserting current codes to the buffer - self.num_ws = self.num_ws + 1 - self.ws.append(w) - return_ws.append(w) - else: - p = random.uniform(0, 1) - if p > 0.5: # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer - random_id = random.randint(0, self.pool_size - 1) # randint is inclusive - tmp = self.ws[random_id].clone() - self.ws[random_id] = w - return_ws.append(tmp) - else: # by another 50% chance, the buffer will return the current image - return_ws.append(w) diff --git a/spaces/umoubuton/atri-bert-vits2/server.py b/spaces/umoubuton/atri-bert-vits2/server.py deleted file mode 100644 index 2ecd50307fdae5c5e26d8cc9453de296532b95ff..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/server.py +++ /dev/null @@ -1,170 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config["JSON_AS_ASCII"] = False - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - assert bert.shape[-1] == len(phone), phone - - if language_str == "ZH": - bert = bert - ja_bert = torch.zeros(768, len(phone)) - elif language_str == "JA": - ja_bert = bert - bert = torch.zeros(1024, len(phone)) - else: - bert = torch.zeros(1024, len(phone)) - ja_bert = torch.zeros(768, len(phone)) - assert bert.shape[-1] == len( - phone - ), f"Bert seq len {bert.shape[-1]} != {len(phone)}" - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, ja_bert, phone, tone, language - - -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid, language): - bert, ja_bert, phones, tones, lang_ids = get_text(text, language, hps) - with torch.no_grad(): - x_tst = phones.to(dev).unsqueeze(0) - tones = tones.to(dev).unsqueeze(0) - lang_ids = lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - ja_bert = ja_bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = 
torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = ( - net_g.infer( - x_tst, - x_tst_lengths, - speakers, - tones, - lang_ids, - bert, - ja_bert, - sdp_ratio=sdp_ratio, - noise_scale=noise_scale, - noise_scale_w=noise_scale_w, - length_scale=length_scale, - )[0][0, 0] - .data.cpu() - .float() - .numpy() - ) - return audio - - -def replace_punctuation(text, i=2): - punctuation = ",。?!" - for char in punctuation: - text = text.replace(char, char * i) - return text - - -def wav2(i, o, format): - inp = avopen(i, "rb") - out = avopen(o, "wb", format=format) - if format == "ogg": - format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): - out.mux(p) - - for p in ostream.encode(None): - out.mux(p) - - out.close() - inp.close() - - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev = "cuda" -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model, -).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None, skip_optimizer=True) - - -@app.route("/") -def main(): - try: - speaker = request.args.get("speaker") - text = request.args.get("text").replace("/n", "") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - language = request.args.get("language") - if length >= 2: - return "Too big length" - if len(text) >= 250: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - if language not in ("JA", "ZH"): - return "Invalid language" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer( - text, - sdp_ratio=sdp_ratio, - noise_scale=noise, - noise_scale_w=noisew, - length_scale=length, - sid=speaker, - language=language, - ) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk Autocad 2012 Mechanical X32bit (english) NEW! Keygenl.md b/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk Autocad 2012 Mechanical X32bit (english) NEW! Keygenl.md deleted file mode 100644 index f4f0d341ad46a1e9dbbcfda16d494ff20a9211b5..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Autodesk Autocad 2012 Mechanical X32bit (english) NEW! Keygenl.md +++ /dev/null @@ -1,61 +0,0 @@ -
          -

          How to Install and Activate Autodesk Autocad 2012 Mechanical X32bit (english) Keygenl

          -

          Autodesk Autocad 2012 Mechanical is a powerful software for designing and drafting mechanical parts and assemblies. It allows you to create 2D and 3D models, generate drawings, perform calculations, and simulate motion and stress. If you want to use this software, you need to install and activate it with a valid license key. In this article, we will show you how to do that using the keygenl tool.

          -

          Step 1: Download and Extract the Software

          -

          The first step is to download the software from the official Autodesk website or from a trusted source. You need to choose the x32bit version for your operating system and language. The file name should be something like "AutoCAD_Mechanical_2012_English_Win_32bit.exe". After downloading, you need to extract the file using a tool like WinRAR or 7-Zip. You should get a folder with the same name as the file.

          -

          Autodesk Autocad 2012 Mechanical X32bit (english) Keygenl


          DOWNLOAD >> https://urlcod.com/2uyXgK



          -

          Step 2: Run the Setup File

          -

          The next step is to run the setup file inside the extracted folder. You will see a window with the Autodesk logo and a progress bar. Wait for it to finish loading and then click on "Install Products". You will see another window with a list of products to install. Make sure that "Autodesk Autocad 2012 Mechanical" is checked and then click on "Next". You will see another window with the license agreement. Read it carefully and then click on "I accept" and then on "Next". You will see another window with the installation options. You can choose the default settings or customize them according to your preferences. Then click on "Next" and then on "Install". The installation process will begin and may take some time depending on your system specifications.

          -

          Step 3: Generate and Enter the License Key

          -

          The final step is to generate and enter the license key using the keygenl tool. This tool is a small program that can create valid license keys for various Autodesk products. You need to download it from a reliable source and run it as administrator. You will see a window with several fields and buttons. Follow these steps to use it:

          -
            -
          • Select "Autodesk Autocad 2012 Mechanical" from the drop-down menu under "Product Name".
          • -
          • Copy the serial number from the installation window and paste it into the field under "Serial Number".
          • -
          • Click on "Generate" and wait for a few seconds. You will see a license key in the field under "Activation Code".
          • -
          • Copy the license key and paste it into the installation window under "Enter your activation code here".
          • -
          • Click on "Next" and then on "Finish". The installation and activation process is complete.
          • -
          -

          Congratulations! You have successfully installed and activated Autodesk Autocad 2012 Mechanical X32bit (english) Keygenl. You can now launch the software and enjoy its features.

          - -

          How to Use Autodesk Autocad 2012 Mechanical X32bit (english) Keygenl

          -

          Now that you have installed and activated the software, you can start using it for your projects. Here are some basic steps to get you started:

          -

          Step 1: Create a New Drawing

          -

          To create a new drawing, you need to click on the "New" button on the top left corner of the screen. You will see a window with several templates to choose from. You can select one of the predefined templates or create your own. Then click on "OK". You will see a blank drawing area with a grid and a coordinate system.

          -

          Step 2: Draw and Modify Objects

          -

          To draw and modify objects, you need to use the tools on the ribbon and the command line. The ribbon is a panel with several tabs and buttons that contain different functions and options. The command line is a text box at the bottom of the screen that allows you to enter commands and parameters. You can also use keyboard shortcuts and mouse clicks to perform actions.

          -

          Some of the basic tools you can use are:

          -
            -
          • The "Line" tool to draw straight lines.
          • -
          • The "Circle" tool to draw circles.
          • -
          • The "Arc" tool to draw arcs.
          • -
          • The "Rectangle" tool to draw rectangles.
          • -
          • The "Polyline" tool to draw connected lines and curves.
          • -
          • The "Move" tool to move objects.
          • -
          • The "Copy" tool to copy objects.
          • -
          • The "Rotate" tool to rotate objects.
          • -
          • The "Scale" tool to scale objects.
          • -
          • The "Trim" tool to cut off parts of objects.
          • -
          • The "Extend" tool to extend objects.
          • -
          • The "Fillet" tool to round off corners of objects.
          • -
          • The "Chamfer" tool to bevel edges of objects.
          • -
          -

          You can also use the "Properties" palette to change the attributes of objects, such as color, layer, linetype, lineweight, etc.

          -

          -

          Step 3: Add Dimensions and Annotations

          -

          To add dimensions and annotations, you need to use the tools on the "Annotate" tab of the ribbon. Dimensions are numerical measurements that show the size and position of objects. Annotations are text labels that provide additional information or instructions. Some of the tools you can use are:

          -
            -
          • The "Linear" tool to add horizontal or vertical dimensions.
          • -
          • The "Aligned" tool to add angled dimensions.
          • -
          • The "Radius" tool to add radius dimensions.
          • -
          • The "Diameter" tool to add diameter dimensions.
          • -
          • The "Angular" tool to add angle dimensions.
          • -
          • The "Leader" tool to add leader lines with text or blocks.
          • -
          • The "Text" tool to add single-line text.
          • -
          • The "Mtext" tool to add multi-line text.
          • -
          -

          You can also use the "Dimension Style Manager" and the "Text Style Manager" to customize the appearance and format of dimensions and annotations.

          -

          Step 4: Save and Print Your Drawing

          -

          To save your drawing, you need to click on the "Save" button on the top left corner of the screen. You will see a window where you can enter a file name and choose a location. You can also select a file format from the drop-down menu. The default format is ".dwg", which is compatible with other Autodesk products. You can also choose other formats, such as ".dxf", ".pdf", ".jpg", etc.

          -

          To print your drawing, you need to click on the "Print" button on the top left corner of the screen. You will see a window where you can select a printer and adjust the settings. You can also preview your drawing before printing it. You can choose between different layouts, scales, paper sizes, orientations, etc.

          -
          -
          \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py deleted file mode 100644 index 76401712387cbda1bb29dbd6669fc9f774903c7e..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/hybrid_video.py +++ /dev/null @@ -1,436 +0,0 @@ -import cv2 -import os -import pathlib -import numpy as np -import random -from PIL import Image, ImageChops, ImageOps, ImageEnhance -from .video_audio_utilities import vid2frames, get_quick_vid_info, get_frame_name, get_next_frame -from .human_masking import video2humanmasks - -def delete_all_imgs_in_folder(folder_path): - files = list(pathlib.Path(folder_path).glob('*.jpg')) - files.extend(list(pathlib.Path(folder_path).glob('*.png'))) - for f in files: os.remove(f) - -def hybrid_generation(args, anim_args, root): - video_in_frame_path = os.path.join(args.outdir, 'inputframes') - hybrid_frame_path = os.path.join(args.outdir, 'hybridframes') - human_masks_path = os.path.join(args.outdir, 'human_masks') - - if anim_args.hybrid_generate_inputframes: - # create folders for the video input frames and optional hybrid frames to live in - os.makedirs(video_in_frame_path, exist_ok=True) - os.makedirs(hybrid_frame_path, exist_ok=True) - - # delete frames if overwrite = true - if anim_args.overwrite_extracted_frames: - delete_all_imgs_in_folder(hybrid_frame_path) - - # save the video frames from input video - print(f"Video to extract: {anim_args.video_init_path}") - print(f"Extracting video (1 every {anim_args.extract_nth_frame}) frames to {video_in_frame_path}...") - video_fps = vid2frames(video_path=anim_args.video_init_path, video_in_frame_path=video_in_frame_path, n=anim_args.extract_nth_frame, overwrite=anim_args.overwrite_extracted_frames, extract_from_frame=anim_args.extract_from_frame, extract_to_frame=anim_args.extract_to_frame) - - # extract alpha masks of humans from the extracted input video imgs - if anim_args.hybrid_generate_human_masks != "None": - # create a folder for the human masks imgs to live in - print(f"Checking /creating a folder for the human masks") - os.makedirs(human_masks_path, exist_ok=True) - - # delete frames if overwrite = true - if anim_args.overwrite_extracted_frames: - delete_all_imgs_in_folder(human_masks_path) - - # in case that generate_input_frames isn't selected, we won't get the video fps rate as vid2frames isn't called, So we'll check the video fps in here instead - if not anim_args.hybrid_generate_inputframes: - _, video_fps, _ = get_quick_vid_info(anim_args.video_init_path) - - # calculate the correct fps of the masked video according to the original video fps and 'extract_nth_frame' - output_fps = video_fps/anim_args.extract_nth_frame - - # generate the actual alpha masks from the input imgs - print(f"Extracting alpha humans masks from the input frames") - video2humanmasks(video_in_frame_path, human_masks_path, anim_args.hybrid_generate_human_masks, output_fps) - - # determine max frames from length of input frames - anim_args.max_frames = len([f for f in pathlib.Path(video_in_frame_path).glob('*.jpg')]) - print(f"Using {anim_args.max_frames} input frames from {video_in_frame_path}...") - - # get sorted list of inputfiles - inputfiles = sorted(pathlib.Path(video_in_frame_path).glob('*.jpg')) - - # use first frame as init - if 
anim_args.hybrid_use_first_frame_as_init_image: - for f in inputfiles: - args.init_image = str(f) - args.use_init = True - print(f"Using init_image from video: {args.init_image}") - break - - return args, anim_args, inputfiles - -def hybrid_composite(args, anim_args, frame_idx, prev_img, depth_model, hybrid_comp_schedules, root): - video_frame = os.path.join(args.outdir, 'inputframes', get_frame_name(anim_args.video_init_path) + f"{frame_idx:05}.jpg") - video_depth_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_vid_depth{frame_idx:05}.jpg") - depth_frame = os.path.join(args.outdir, f"{args.timestring}_depth_{frame_idx-1:05}.png") - mask_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_mask{frame_idx:05}.jpg") - comp_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_comp{frame_idx:05}.jpg") - prev_frame = os.path.join(args.outdir, 'hybridframes', get_frame_name(anim_args.video_init_path) + f"_prev{frame_idx:05}.jpg") - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_BGR2RGB) - prev_img_hybrid = Image.fromarray(prev_img) - video_image = Image.open(video_frame) - video_image = video_image.resize((args.W, args.H), Image.Resampling.LANCZOS) - hybrid_mask = None - - # composite mask types - if anim_args.hybrid_comp_mask_type == 'Depth': # get depth from last generation - hybrid_mask = Image.open(depth_frame) - elif anim_args.hybrid_comp_mask_type == 'Video Depth': # get video depth - video_depth = depth_model.predict(np.array(video_image), anim_args, root.half_precision) - depth_model.save(video_depth_frame, video_depth) - hybrid_mask = Image.open(video_depth_frame) - elif anim_args.hybrid_comp_mask_type == 'Blend': # create blend mask image - hybrid_mask = Image.blend(ImageOps.grayscale(prev_img_hybrid), ImageOps.grayscale(video_image), hybrid_comp_schedules['mask_blend_alpha']) - elif anim_args.hybrid_comp_mask_type == 'Difference': # create difference mask image - hybrid_mask = ImageChops.difference(ImageOps.grayscale(prev_img_hybrid), ImageOps.grayscale(video_image)) - - # optionally invert mask, if mask type is defined - if anim_args.hybrid_comp_mask_inverse and anim_args.hybrid_comp_mask_type != "None": - hybrid_mask = ImageOps.invert(hybrid_mask) - - # if a mask type is selected, make composition - if hybrid_mask == None: - hybrid_comp = video_image - else: - # ensure grayscale - hybrid_mask = ImageOps.grayscale(hybrid_mask) - # equalization before - if anim_args.hybrid_comp_mask_equalize in ['Before', 'Both']: - hybrid_mask = ImageOps.equalize(hybrid_mask) - # contrast - hybrid_mask = ImageEnhance.Contrast(hybrid_mask).enhance(hybrid_comp_schedules['mask_contrast']) - # auto contrast with cutoffs lo/hi - if anim_args.hybrid_comp_mask_auto_contrast: - hybrid_mask = autocontrast_grayscale(np.array(hybrid_mask), hybrid_comp_schedules['mask_auto_contrast_cutoff_low'], hybrid_comp_schedules['mask_auto_contrast_cutoff_high']) - hybrid_mask = Image.fromarray(hybrid_mask) - hybrid_mask = ImageOps.grayscale(hybrid_mask) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_mask.save(mask_frame) - # equalization after - if anim_args.hybrid_comp_mask_equalize in ['After', 'Both']: - hybrid_mask = ImageOps.equalize(hybrid_mask) - # do compositing and save - hybrid_comp = Image.composite(prev_img_hybrid, video_image, hybrid_mask) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_comp.save(comp_frame) - - # final blend of composite with prev_img, or just a 
blend if no composite is selected - hybrid_blend = Image.blend(prev_img_hybrid, hybrid_comp, hybrid_comp_schedules['alpha']) - if anim_args.hybrid_comp_save_extra_frames: - hybrid_blend.save(prev_frame) - - prev_img = cv2.cvtColor(np.array(hybrid_blend), cv2.COLOR_RGB2BGR) - - # restore to np array and return - return args, prev_img - -def get_matrix_for_hybrid_motion(frame_idx, dimensions, inputfiles, hybrid_motion): - img1 = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx-1]), dimensions), cv2.COLOR_BGR2GRAY) - img2 = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions), cv2.COLOR_BGR2GRAY) - matrix = get_transformation_matrix_from_images(img1, img2, hybrid_motion) - print(f"Calculating {hybrid_motion} RANSAC matrix for frames {frame_idx} to {frame_idx+1}") - return matrix - -def get_matrix_for_hybrid_motion_prev(frame_idx, dimensions, inputfiles, prev_img, hybrid_motion): - # first handle invalid images from cadence by returning default matrix - height, width = prev_img.shape[:2] - if height == 0 or width == 0 or prev_img != np.uint8: - return get_hybrid_motion_default_matrix(hybrid_motion) - else: - prev_img_gray = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY) - img = cv2.cvtColor(get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions), cv2.COLOR_BGR2GRAY) - matrix = get_transformation_matrix_from_images(prev_img_gray, img, hybrid_motion) - print(f"Calculating {hybrid_motion} RANSAC matrix for frames {frame_idx} to {frame_idx+1}") - return matrix - -def get_flow_for_hybrid_motion(frame_idx, dimensions, inputfiles, hybrid_frame_path, method, do_flow_visualization=False): - print(f"Calculating {method} optical flow for frames {frame_idx} to {frame_idx+1}") - i1 = get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions) - i2 = get_resized_image_from_filename(str(inputfiles[frame_idx+1]), dimensions) - flow = get_flow_from_images(i1, i2, method) - if do_flow_visualization: - save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path) - return flow - -def get_flow_for_hybrid_motion_prev(frame_idx, dimensions, inputfiles, hybrid_frame_path, prev_img, method, do_flow_visualization=False): - print(f"Calculating {method} optical flow for frames {frame_idx} to {frame_idx+1}") - # first handle invalid images from cadence by returning default matrix - height, width = prev_img.shape[:2] - if height == 0 or width == 0: - flow = get_hybrid_motion_default_flow(dimensions) - else: - i1 = prev_img.astype(np.uint8) - i2 = get_resized_image_from_filename(str(inputfiles[frame_idx]), dimensions) - flow = get_flow_from_images(i1, i2, method) - if do_flow_visualization: - save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path) - return flow - -def image_transform_ransac(image_cv2, xform, hybrid_motion, border_mode=cv2.BORDER_REPLICATE): - if hybrid_motion == "Perspective": - return image_transform_perspective(image_cv2, xform, border_mode=border_mode) - else: # Affine - return image_transform_affine(image_cv2, xform, border_mode=border_mode) - -def image_transform_optical_flow(img, flow, border_mode=cv2.BORDER_REPLICATE, flow_reverse=False): - if not flow_reverse: - flow = -flow - h, w = img.shape[:2] - flow[:, :, 0] += np.arange(w) - flow[:, :, 1] += np.arange(h)[:,np.newaxis] - return remap(img, flow, border_mode) - -def image_transform_affine(image_cv2, xform, border_mode=cv2.BORDER_REPLICATE): - return cv2.warpAffine( - image_cv2, - xform, - 
(image_cv2.shape[1],image_cv2.shape[0]), - borderMode=border_mode - ) - -def image_transform_perspective(image_cv2, xform, border_mode=cv2.BORDER_REPLICATE): - return cv2.warpPerspective( - image_cv2, - xform, - (image_cv2.shape[1], image_cv2.shape[0]), - borderMode=border_mode - ) - -def get_hybrid_motion_default_matrix(hybrid_motion): - if hybrid_motion == "Perspective": - arr = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) - else: - arr = np.array([[1., 0., 0.], [0., 1., 0.]]) - return arr - -def get_hybrid_motion_default_flow(dimensions): - cols, rows = dimensions - flow = np.zeros((rows, cols, 2), np.float32) - return flow - -def get_transformation_matrix_from_images(img1, img2, hybrid_motion, max_corners=200, quality_level=0.01, min_distance=30, block_size=3): - # Detect feature points in previous frame - prev_pts = cv2.goodFeaturesToTrack(img1, - maxCorners=max_corners, - qualityLevel=quality_level, - minDistance=min_distance, - blockSize=block_size) - - if prev_pts is None or len(prev_pts) < 8 or img1 is None or img2 is None: - return get_hybrid_motion_default_matrix(hybrid_motion) - - # Get optical flow - curr_pts, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, prev_pts, None) - - # Filter only valid points - idx = np.where(status==1)[0] - prev_pts = prev_pts[idx] - curr_pts = curr_pts[idx] - - if len(prev_pts) < 8 or len(curr_pts) < 8: - return get_hybrid_motion_default_matrix(hybrid_motion) - - if hybrid_motion == "Perspective": # Perspective - Find the transformation between points - transformation_matrix, mask = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 5.0) - return transformation_matrix - else: # Affine - Compute a rigid transformation (without depth, only scale + rotation + translation) - transformation_rigid_matrix, rigid_mask = cv2.estimateAffinePartial2D(prev_pts, curr_pts) - return transformation_rigid_matrix - -def get_flow_from_images(i1, i2, method): - if method =="DIS Medium": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_MEDIUM) - elif method =="DIS Fast": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_FAST) - elif method =="DIS UltraFast": - r = get_flow_from_images_DIS(i1, i2, cv2.DISOPTICAL_FLOW_PRESET_ULTRAFAST) - elif method == "DenseRLOF": # requires running opencv-contrib-python (full opencv) INSTEAD of opencv-python - r = get_flow_from_images_Dense_RLOF(i1, i2) - elif method == "SF": # requires running opencv-contrib-python (full opencv) INSTEAD of opencv-python - r = get_flow_from_images_SF(i1, i2) - elif method =="Farneback Fine": - r = get_flow_from_images_Farneback(i1, i2, 'fine') - else: # Farneback Normal: - r = get_flow_from_images_Farneback(i1, i2) - return r - -def get_flow_from_images_DIS(i1, i2, preset): - i1 = cv2.cvtColor(i1, cv2.COLOR_BGR2GRAY) - i2 = cv2.cvtColor(i2, cv2.COLOR_BGR2GRAY) - dis=cv2.DISOpticalFlow_create(preset) - return dis.calc(i1, i2, None) - -def get_flow_from_images_Dense_RLOF(i1, i2, last_flow=None): - return cv2.optflow.calcOpticalFlowDenseRLOF(i1, i2, flow = last_flow) - -def get_flow_from_images_SF(i1, i2, last_flow=None, layers = 3, averaging_block_size = 2, max_flow = 4): - return cv2.optflow.calcOpticalFlowSF(i1, i2, layers, averaging_block_size, max_flow) - -def get_flow_from_images_Farneback(i1, i2, preset="normal", last_flow=None, pyr_scale = 0.5, levels = 3, winsize = 15, iterations = 3, poly_n = 5, poly_sigma = 1.2, flags = 0): - flags = cv2.OPTFLOW_FARNEBACK_GAUSSIAN # Specify the operation flags - pyr_scale = 0.5 # The image scale (<1) to build 
pyramids for each image - if preset == "fine": - levels = 13 # The number of pyramid layers, including the initial image - winsize = 77 # The averaging window size - iterations = 13 # The number of iterations at each pyramid level - poly_n = 15 # The size of the pixel neighborhood used to find polynomial expansion in each pixel - poly_sigma = 0.8 # The standard deviation of the Gaussian used to smooth derivatives used as a basis for the polynomial expansion - else: # "normal" - levels = 5 # The number of pyramid layers, including the initial image - winsize = 21 # The averaging window size - iterations = 5 # The number of iterations at each pyramid level - poly_n = 7 # The size of the pixel neighborhood used to find polynomial expansion in each pixel - poly_sigma = 1.2 # The standard deviation of the Gaussian used to smooth derivatives used as a basis for the polynomial expansion - i1 = cv2.cvtColor(i1, cv2.COLOR_BGR2GRAY) - i2 = cv2.cvtColor(i2, cv2.COLOR_BGR2GRAY) - flags = 0 # flags = cv2.OPTFLOW_USE_INITIAL_FLOW - flow = cv2.calcOpticalFlowFarneback(i1, i2, last_flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags) - return flow - -def save_flow_visualization(frame_idx, dimensions, flow, inputfiles, hybrid_frame_path): - flow_img_file = os.path.join(hybrid_frame_path, f"flow{frame_idx:05}.jpg") - flow_img = cv2.imread(str(inputfiles[frame_idx])) - flow_img = cv2.resize(flow_img, (dimensions[0], dimensions[1]), cv2.INTER_AREA) - flow_img = cv2.cvtColor(flow_img, cv2.COLOR_RGB2GRAY) - flow_img = cv2.cvtColor(flow_img, cv2.COLOR_GRAY2BGR) - flow_img = draw_flow_lines_in_grid_in_color(flow_img, flow) - flow_img = cv2.cvtColor(flow_img, cv2.COLOR_BGR2RGB) - cv2.imwrite(flow_img_file, flow_img) - print(f"Saved optical flow visualization: {flow_img_file}") - -def draw_flow_lines_in_grid_in_color(img, flow, step=8, magnitude_multiplier=1, min_magnitude = 1, max_magnitude = 10000): - flow = flow * magnitude_multiplier - h, w = img.shape[:2] - y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1).astype(int) - fx, fy = flow[y,x].T - lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2) - lines = np.int32(lines + 0.5) - vis = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - vis = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR) - - mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]) - hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8) - hsv[...,0] = ang*180/np.pi/2 - hsv[...,1] = 255 - hsv[...,2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX) - bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) - vis = cv2.add(vis, bgr) - - # Iterate through the lines - for (x1, y1), (x2, y2) in lines: - # Calculate the magnitude of the line - magnitude = np.sqrt((x2 - x1)**2 + (y2 - y1)**2) - - # Only draw the line if it falls within the magnitude range - if min_magnitude <= magnitude <= max_magnitude: - b = int(bgr[y1, x1, 0]) - g = int(bgr[y1, x1, 1]) - r = int(bgr[y1, x1, 2]) - color = (b, g, r) - cv2.arrowedLine(vis, (x1, y1), (x2, y2), color, thickness=1, tipLength=0.1) - return vis - -def draw_flow_lines_in_color(img, flow, threshold=3, magnitude_multiplier=1, min_magnitude = 0, max_magnitude = 10000): - # h, w = img.shape[:2] - vis = img.copy() # Create a copy of the input image - - # Find the locations in the flow field where the magnitude of the flow is greater than the threshold - mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1]) - idx = np.where(mag > threshold) - - # Create HSV image - hsv = np.zeros((flow.shape[0], flow.shape[1], 3), dtype=np.uint8) - hsv[...,0] = 
ang*180/np.pi/2 - hsv[...,1] = 255 - hsv[...,2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX) - - # Convert HSV image to BGR - bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR) - - # Add color from bgr - vis = cv2.add(vis, bgr) - - # Draw an arrow at each of these locations to indicate the direction of the flow - for i, (y, x) in enumerate(zip(idx[0], idx[1])): - # Calculate the magnitude of the line - x2 = x + magnitude_multiplier * int(flow[y, x, 0]) - y2 = y + magnitude_multiplier * int(flow[y, x, 1]) - magnitude = np.sqrt((x2 - x)**2 + (y2 - y)**2) - - # Only draw the line if it falls within the magnitude range - if min_magnitude <= magnitude <= max_magnitude: - if i % random.randint(100, 200) == 0: - b = int(bgr[y, x, 0]) - g = int(bgr[y, x, 1]) - r = int(bgr[y, x, 2]) - color = (b, g, r) - cv2.arrowedLine(vis, (x, y), (x2, y2), color, thickness=1, tipLength=0.25) - - return vis - -def autocontrast_grayscale(image, low_cutoff=0, high_cutoff=100): - # Perform autocontrast on a grayscale np array image. - # Find the minimum and maximum values in the image - min_val = np.percentile(image, low_cutoff) - max_val = np.percentile(image, high_cutoff) - - # Scale the image so that the minimum value is 0 and the maximum value is 255 - image = 255 * (image - min_val) / (max_val - min_val) - - # Clip values that fall outside the range [0, 255] - image = np.clip(image, 0, 255) - - return image - -def get_resized_image_from_filename(im, dimensions): - img = cv2.imread(im) - return cv2.resize(img, (dimensions[0], dimensions[1]), cv2.INTER_AREA) - -def remap(img, flow, border_mode = cv2.BORDER_REFLECT_101): - # copyMakeBorder doesn't support wrap, but supports replicate. Replaces wrap with reflect101. - if border_mode == cv2.BORDER_WRAP: - border_mode = cv2.BORDER_REFLECT_101 - h, w = img.shape[:2] - displacement = int(h * 0.25), int(w * 0.25) - larger_img = cv2.copyMakeBorder(img, displacement[0], displacement[0], displacement[1], displacement[1], border_mode) - lh, lw = larger_img.shape[:2] - larger_flow = extend_flow(flow, lw, lh) - remapped_img = cv2.remap(larger_img, larger_flow, None, cv2.INTER_LINEAR, border_mode) - output_img = center_crop_image(remapped_img, w, h) - return output_img - -def center_crop_image(img, w, h): - y, x, _ = img.shape - width_indent = int((x - w) / 2) - height_indent = int((y - h) / 2) - cropped_img = img[height_indent:y-height_indent, width_indent:x-width_indent] - return cropped_img - -def extend_flow(flow, w, h): - # Get the shape of the original flow image - flow_h, flow_w = flow.shape[:2] - # Calculate the position of the image in the new image - x_offset = int((w - flow_w) / 2) - y_offset = int((h - flow_h) / 2) - # Generate the X and Y grids - x_grid, y_grid = np.meshgrid(np.arange(w), np.arange(h)) - # Create the new flow image and set it to the X and Y grids - new_flow = np.dstack((x_grid, y_grid)).astype(np.float32) - # Shift the values of the original flow by the size of the border - flow[:,:,0] += x_offset - flow[:,:,1] += y_offset - # Overwrite the middle of the grid with the original flow - new_flow[y_offset:y_offset+flow_h, x_offset:x_offset+flow_w, :] = flow - # Return the extended image - return new_flow - \ No newline at end of file diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/modules/head.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/modules/head.md deleted file mode 100644 index 
7a460b521f69955da05050b16109e081c9be680c..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/nn/modules/head.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -description: 'Learn about Ultralytics YOLO modules: Segment, Classify, and RTDETRDecoder. Optimize object detection and classification in your project.' -keywords: Ultralytics, YOLO, object detection, pose estimation, RTDETRDecoder, modules, classes, documentation ---- - -## Detect ---- -### ::: ultralytics.nn.modules.head.Detect -

          - -## Segment ---- -### ::: ultralytics.nn.modules.head.Segment -

          - -## Pose ---- -### ::: ultralytics.nn.modules.head.Pose -

          - -## Classify ---- -### ::: ultralytics.nn.modules.head.Classify -

          - -## RTDETRDecoder ---- -### ::: ultralytics.nn.modules.head.RTDETRDecoder -
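The entries above are autodoc stubs, so a brief illustration may help show where the documented head classes appear in practice. The sketch below is only a hypothetical usage example: it assumes the `ultralytics` package from this repo is importable and uses `yolov8n.pt` purely as an example checkpoint name.

```python
# Minimal sketch (assumptions: ultralytics is installed; "yolov8n.pt" is an example checkpoint).
from ultralytics import YOLO
from ultralytics.nn.modules.head import Detect

model = YOLO("yolov8n.pt")        # load a pretrained detection model
head = model.model.model[-1]      # the last module of a detection model's layer stack is its head
print(type(head).__name__)        # expected: "Detect", the class documented above
print(isinstance(head, Detect))   # expected: True for detection models
```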

          diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/__init__.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/__init__.py deleted file mode 100644 index 8e96f915d0d670a2140850d13e9d72ae78a7d3d8..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -from .rtdetr import RTDETR -from .sam import SAM - -__all__ = 'RTDETR', 'SAM' # allow simpler import diff --git a/spaces/viait/dolphinchat-chatgpt-demo-ui/README.md b/spaces/viait/dolphinchat-chatgpt-demo-ui/README.md deleted file mode 100644 index 1b57a37568d4b902b43c546e39b7ba54ad3f9314..0000000000000000000000000000000000000000 --- a/spaces/viait/dolphinchat-chatgpt-demo-ui/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: DolphinChat — ChatGPT -emoji: 🐬 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: dolphin.script.py -pinned: true ---- - -

          -

          ℹ️ I am DolphinChat and I was created to help people!

          -

          -

          ✅️ I have been trained on almost the entire Internet!

          -

          -

♻️ I can communicate in more than 60 of the world's languages!

          -

          -

📂 I am an open-source, non-commercial project, and I keep your data safe!

          -

          -

          ▶️ I'm almost the perfect chat assistant, so try me!

          -

          diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/models/encdec.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/models/encdec.py deleted file mode 100644 index ae72afaa5aa59ad67cadb38e0d83e420fc6edb09..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/VQ-Trans/models/encdec.py +++ /dev/null @@ -1,67 +0,0 @@ -import torch.nn as nn -from models.resnet import Resnet1D - -class Encoder(nn.Module): - def __init__(self, - input_emb_width = 3, - output_emb_width = 512, - down_t = 3, - stride_t = 2, - width = 512, - depth = 3, - dilation_growth_rate = 3, - activation='relu', - norm=None): - super().__init__() - - blocks = [] - filter_t, pad_t = stride_t * 2, stride_t // 2 - blocks.append(nn.Conv1d(input_emb_width, width, 3, 1, 1)) - blocks.append(nn.ReLU()) - - for i in range(down_t): - input_dim = width - block = nn.Sequential( - nn.Conv1d(input_dim, width, filter_t, stride_t, pad_t), - Resnet1D(width, depth, dilation_growth_rate, activation=activation, norm=norm), - ) - blocks.append(block) - blocks.append(nn.Conv1d(width, output_emb_width, 3, 1, 1)) - self.model = nn.Sequential(*blocks) - - def forward(self, x): - return self.model(x) - -class Decoder(nn.Module): - def __init__(self, - input_emb_width = 3, - output_emb_width = 512, - down_t = 3, - stride_t = 2, - width = 512, - depth = 3, - dilation_growth_rate = 3, - activation='relu', - norm=None): - super().__init__() - blocks = [] - - filter_t, pad_t = stride_t * 2, stride_t // 2 - blocks.append(nn.Conv1d(output_emb_width, width, 3, 1, 1)) - blocks.append(nn.ReLU()) - for i in range(down_t): - out_dim = width - block = nn.Sequential( - Resnet1D(width, depth, dilation_growth_rate, reverse_dilation=True, activation=activation, norm=norm), - nn.Upsample(scale_factor=2, mode='nearest'), - nn.Conv1d(width, out_dim, 3, 1, 1) - ) - blocks.append(block) - blocks.append(nn.Conv1d(width, width, 3, 1, 1)) - blocks.append(nn.ReLU()) - blocks.append(nn.Conv1d(width, input_emb_width, 3, 1, 1)) - self.model = nn.Sequential(*blocks) - - def forward(self, x): - return self.model(x) - diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/utils.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/utils.py deleted file mode 100644 index 0f5712cb42c38a2e8563bf563efb6681383cab9b..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. 
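    Example:
        An illustrative sketch only (it assumes ``DataParallel`` is registered
        in ``MODULE_WRAPPERS``, as described above):

        >>> import torch.nn as nn
        >>> from torch.nn.parallel import DataParallel
        >>> wrapped = DataParallel(nn.Linear(2, 2))
        >>> is_module_wrapper(wrapped)      # DataParallel is a registered wrapper
        True
        >>> is_module_wrapper(wrapped.module)  # the bare Linear module is not
        False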
- """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/spaces/waiwaiwai/Real-CUGAN/upcunet_v3.py b/spaces/waiwaiwai/Real-CUGAN/upcunet_v3.py deleted file mode 100644 index f7919a6cc9efe3b8af73a73e30825a4c7d7d76da..0000000000000000000000000000000000000000 --- a/spaces/waiwaiwai/Real-CUGAN/upcunet_v3.py +++ /dev/null @@ -1,714 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F -import os, sys -import numpy as np - -root_path = os.path.abspath('.') -sys.path.append(root_path) - - -class SEBlock(nn.Module): - def __init__(self, in_channels, reduction=8, bias=False): - super(SEBlock, self).__init__() - self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias) - self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias) - - def forward(self, x): - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half() - else: - x0 = torch.mean(x, dim=(2, 3), keepdim=True) - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - def forward_mean(self, x, x0): - x0 = self.conv1(x0) - x0 = F.relu(x0, inplace=True) - x0 = self.conv2(x0) - x0 = torch.sigmoid(x0) - x = torch.mul(x, x0) - return x - - -class UNetConv(nn.Module): - def __init__(self, in_channels, mid_channels, out_channels, se): - super(UNetConv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(in_channels, mid_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - nn.Conv2d(mid_channels, out_channels, 3, 1, 0), - nn.LeakyReLU(0.1, inplace=True), - ) - if se: - self.seblock = SEBlock(out_channels, reduction=8, bias=True) - else: - self.seblock = None - - def forward(self, x): - z = self.conv(x) - if self.seblock is not None: - z = self.seblock(z) - return z - - -class UNet1(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet1, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet1x3(nn.Module): - def __init__(self, 
in_channels, out_channels, deconv): - super(UNet1x3, self).__init__() - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 128, 64, se=True) - self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv3 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - def forward_a(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x1, x2): - x2 = self.conv2_up(x2) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - - x1 = F.pad(x1, (-4, -4, -4, -4)) - x3 = self.conv3(x1 + x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - z = self.conv_bottom(x3) - return z - - -class UNet2(nn.Module): - def __init__(self, in_channels, out_channels, deconv): - super(UNet2, self).__init__() - - self.conv1 = UNetConv(in_channels, 32, 64, se=False) - self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0) - self.conv2 = UNetConv(64, 64, 128, se=True) - self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0) - self.conv3 = UNetConv(128, 256, 128, se=True) - self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0) - self.conv4 = UNetConv(128, 64, 64, se=True) - self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0) - self.conv5 = nn.Conv2d(64, 64, 3, 1, 0) - - if deconv: - self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3) - else: - self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0) - - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2(x2) - - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3(x3) - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, -4, -4)) - x4 = self.conv4(x2 + x3) - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - def forward_a(self, x): # conv234结尾有se - x1 = self.conv1(x) - x2 = self.conv1_down(x1) - x2 = F.leaky_relu(x2, 0.1, inplace=True) - x2 = self.conv2.conv(x2) - return x1, x2 - - def forward_b(self, x2): # conv234结尾有se - x3 = self.conv2_down(x2) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - x3 = self.conv3.conv(x3) - return x3 - - def forward_c(self, x2, x3): # conv234结尾有se - x3 = self.conv3_up(x3) - x3 = F.leaky_relu(x3, 0.1, inplace=True) - - x2 = F.pad(x2, (-4, -4, 
-4, -4)) - x4 = self.conv4.conv(x2 + x3) - return x4 - - def forward_d(self, x1, x4): # conv234结尾有se - x4 = self.conv4_up(x4) - x4 = F.leaky_relu(x4, 0.1, inplace=True) - - x1 = F.pad(x1, (-16, -16, -16, -16)) - x5 = self.conv5(x1 + x4) - x5 = F.leaky_relu(x5, 0.1, inplace=True) - - z = self.conv_bottom(x5) - return z - - -class UpCunet2x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet2x, self).__init__() - self.unet1 = UNet1(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 36, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 
= tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 36, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 36, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 36, crop_size[0]): - for j in range(0, w - 36, crop_size[1]): - res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2] - return res # - - -class UpCunet3x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet3x, self).__init__() - self.unet1 = UNet1x3(in_channels, out_channels, deconv=True) - self.unet2 = UNet2(in_channels, out_channels, deconv=False) - - def forward(self, x, tile_mode): # 1.7G - n, c, h0, w0 = x.shape - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 4 + 1) * 4 - pw = ((w0 - 1) // 4 + 1) * 4 - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3] - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_h = (h0 - 1) // 4 * 4 + 4 # 能被4整除 - else: - crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # 减半后能被4整除,所以要先被8整除 - crop_size_w = (w0 - 1) // 4 * 4 + 4 # 能被4整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = 
torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 28, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 28, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 28, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - opt_res_dict[i][j] = x_crop # - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 28, crop_size[0]): - for j in range(0, w - 28, crop_size[1]): - res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - 
if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3] - return res - - -class UpCunet4x(nn.Module): # 完美tile,全程无损 - def __init__(self, in_channels=3, out_channels=3): - super(UpCunet4x, self).__init__() - self.unet1 = UNet1(in_channels, 64, deconv=True) - self.unet2 = UNet2(64, 64, deconv=False) - self.ps = nn.PixelShuffle(2) - self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True) - - def forward(self, x, tile_mode): - n, c, h0, w0 = x.shape - x00 = x - if (tile_mode == 0): # 不tile - ph = ((h0 - 1) // 2 + 1) * 2 - pw = ((w0 - 1) // 2 + 1) * 2 - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # 需要保证被2整除 - x = self.unet1.forward(x) - x0 = self.unet2.forward(x) - x1 = F.pad(x, (-20, -20, -20, -20)) - x = torch.add(x0, x1) - x = self.conv_final(x) - x = F.pad(x, (-1, -1, -1, -1)) - x = self.ps(x) - if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4] - x += F.interpolate(x00, scale_factor=4, mode='nearest') - return x - elif (tile_mode == 1): # 对长边减半 - if (w0 >= h0): - crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除 - else: - crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除 - crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除 - crop_size = (crop_size_h, crop_size_w) # 6.6G - elif (tile_mode == 2): # hw都减半 - crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G - elif (tile_mode == 3): # hw都三分之一 - crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G - elif (tile_mode == 4): # hw都四分之一 - crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G - ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0] - pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1] - x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') - n, c, h, w = x.shape - se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device) - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - n_patch = 0 - tmp_dict = {} - opt_res_dict = {} - for i in range(0, h - 38, crop_size[0]): - tmp_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38] - n, c1, h1, w1 = x_crop.shape - tmp0, x_crop = self.unet1.forward_a(x_crop) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - n_patch += 1 - tmp_dict[i][j] = (tmp0, x_crop) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - tmp0, x_crop = tmp_dict[i][j] - x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0) - opt_unet1 = self.unet1.forward_b(tmp0, x_crop) - tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2) - se_mean1 /= n_patch - se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean0 = se_mean0.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j] - tmp_x2 = 
self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1) - tmp_x3 = self.unet2.forward_b(tmp_x2) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True) - se_mean0 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3) - se_mean0 /= n_patch - se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64 - if ("Half" in x.type()): - se_mean1 = se_mean1.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j] - tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0) - tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3) - if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor - tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half() - else: - tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True) - se_mean1 += tmp_se_mean - tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4) - se_mean1 /= n_patch - for i in range(0, h - 38, crop_size[0]): - opt_res_dict[i] = {} - for j in range(0, w - 38, crop_size[1]): - opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j] - tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1) - x0 = self.unet2.forward_d(tmp_x1, tmp_x4) - x1 = F.pad(opt_unet1, (-20, -20, -20, -20)) - x_crop = torch.add(x0, x1) # x0是unet2的最终输出 - x_crop = self.conv_final(x_crop) - x_crop = F.pad(x_crop, (-1, -1, -1, -1)) - x_crop = self.ps(x_crop) - opt_res_dict[i][j] = x_crop - del tmp_dict - torch.cuda.empty_cache() - res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device) - if ("Half" in x.type()): - res = res.half() - for i in range(0, h - 38, crop_size[0]): - for j in range(0, w - 38, crop_size[1]): - # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape) - res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j] - del opt_res_dict - torch.cuda.empty_cache() - if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4] - res += F.interpolate(x00, scale_factor=4, mode='nearest') - return res # - - -class RealWaifuUpScaler(object): - def __init__(self, scale, weight_path, half, device): - weight = torch.load(weight_path, map_location="cpu") - self.model = eval("UpCunet%sx" % scale)() - if (half == True): - self.model = self.model.half().to(device) - else: - self.model = self.model.to(device) - self.model.load_state_dict(weight, strict=True) - self.model.eval() - self.half = half - self.device = device - - def np2tensor(self, np_frame): - if (self.half == False): - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255 - else: - return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255 - - def tensor2np(self, tensor): - if (self.half == False): - return ( - np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0))) - else: - return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), - (1, 2, 0))) - - def __call__(self, frame, tile_mode): - with torch.no_grad(): - tensor = self.np2tensor(frame) - result = self.tensor2np(self.model(tensor, tile_mode)) - return result - - -if __name__ == "__main__": - ###########inference_img - import time, cv2, sys - from time import time as ttime - - for weight_path, scale in 
[("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3), - ("weights_v3/up4x-latest-denoise3x.pth", 4)]: - for tile_mode in [0, 1, 2, 3, 4]: - upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0") - input_dir = "%s/input_dir1" % root_path - output_dir = "%s/opt-dir-all-test" % root_path - os.makedirs(output_dir, exist_ok=True) - for name in os.listdir(input_dir): - print(name) - tmp = name.split(".") - inp_path = os.path.join(input_dir, name) - suffix = tmp[-1] - prefix = ".".join(tmp[:-1]) - tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - print(inp_path, tmp_path) - # 支持中文路径 - # os.link(inp_path, tmp_path)#win用硬链接 - os.symlink(inp_path, tmp_path) # linux用软链接 - frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]] - t0 = ttime() - result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1] - t1 = ttime() - print(prefix, "done", t1 - t0) - tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix)) - cv2.imwrite(tmp_opt_path, result) - n = 0 - while (1): - if (n == 0): - suffix = "_%sx_tile%s.png" % (scale, tile_mode) - else: - suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) # - if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False): - break - else: - n += 1 - final_opt_path = os.path.join(output_dir, prefix + suffix) - os.rename(tmp_opt_path, final_opt_path) - os.remove(tmp_path) diff --git a/spaces/wangrongsheng/ChatImprovement/show_math.py b/spaces/wangrongsheng/ChatImprovement/show_math.py deleted file mode 100644 index 80fa881d1c2ace5813f75b5d8a19ca056a8bfa4f..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/ChatImprovement/show_math.py +++ /dev/null @@ -1,80 +0,0 @@ -# This program is written by: https://github.com/polarwinkel/mdtex2html - -from latex2mathml.converter import convert as tex2mathml -import re - -incomplete = 'formula incomplete' -convError = 'LaTeX-convert-error' - -def convert(mdtex, extensions=[], splitParagraphs=True): - ''' converts recursively the Markdown-LaTeX-mixture to HTML with MathML ''' - found = False - # handle all paragraphs separately (prevents aftereffects) - if splitParagraphs: - parts = re.split("\n\n", mdtex) - result = '' - for part in parts: - result += convert(part, extensions, splitParagraphs=False) - return result - # find first $$-formula: - parts = re.split('\${2}', mdtex, 2) - if len(parts)>1: - found = True - result = convert(parts[0], extensions, splitParagraphs=False)+'\n' - try: - result += '
<div class="blockformula">'+tex2mathml(parts[1])+'</div>
          \n' - except: - result += '
<div class="blockformula">'+convError+'</div>
          ' - if len(parts)==3: - result += convert(parts[2], extensions, splitParagraphs=False) - else: - result += '
<div class="blockformula">'+incomplete+'</div>
          ' - # else find first $-formulas: - else: - parts = re.split('\${1}', mdtex, 2) - if len(parts)>1 and not found: - found = True - try: - mathml = tex2mathml(parts[1]) - except: - mathml = convError - if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula! - parts[0]=parts[0]+'​' - if len(parts)==3: - result = convert(parts[0]+mathml+parts[2], extensions, splitParagraphs=False) - else: - result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False) - # else find first \[..\]-equation: - else: - parts = re.split(r'\\\[', mdtex, 1) - if len(parts)>1 and not found: - found = True - result = convert(parts[0], extensions, splitParagraphs=False)+'\n' - parts = re.split(r'\\\]', parts[1], 1) - try: - result += '
<div class="blockformula">'+tex2mathml(parts[0])+'</div>
          \n' - except: - result += '
<div class="blockformula">'+convError+'</div>
          ' - if len(parts)==2: - result += convert(parts[1], extensions, splitParagraphs=False) - else: - result += '
<div class="blockformula">'+incomplete+'</div>
          ' - # else find first \(..\)-equation: - else: - parts = re.split(r'\\\(', mdtex, 1) - if len(parts)>1 and not found: - found = True - subp = re.split(r'\\\)', parts[1], 1) - try: - mathml = tex2mathml(subp[0]) - except: - mathml = convError - if parts[0].endswith('\n\n') or parts[0]=='': # make sure textblock starts before formula! - parts[0]=parts[0]+'​' - if len(subp)==2: - result = convert(parts[0]+mathml+subp[1], extensions, splitParagraphs=False) - else: - result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False) - if not found: - result = mdtex - return result diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/text.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/text.py deleted file mode 100644 index be3c52edd3d399f1fcee2449ada326c12d9e3f07..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/text.py +++ /dev/null @@ -1,124 +0,0 @@ -from typing import Generator, Sequence - -from metagpt.utils.token_counter import TOKEN_MAX, count_string_tokens - - -def reduce_message_length(msgs: Generator[str, None, None], model_name: str, system_text: str, reserved: int = 0,) -> str: - """Reduce the length of concatenated message segments to fit within the maximum token size. - - Args: - msgs: A generator of strings representing progressively shorter valid prompts. - model_name: The name of the encoding to use. (e.g., "gpt-3.5-turbo") - system_text: The system prompts. - reserved: The number of reserved tokens. - - Returns: - The concatenated message segments reduced to fit within the maximum token size. - - Raises: - RuntimeError: If it fails to reduce the concatenated message length. - """ - max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved - for msg in msgs: - if count_string_tokens(msg, model_name) < max_token: - return msg - - raise RuntimeError("fail to reduce message length") - - -def generate_prompt_chunk( - text: str, - prompt_template: str, - model_name: str, - system_text: str, - reserved: int = 0, -) -> Generator[str, None, None]: - """Split the text into chunks of a maximum token size. - - Args: - text: The text to split. - prompt_template: The template for the prompt, containing a single `{}` placeholder. For example, "### Reference\n{}". - model_name: The name of the encoding to use. (e.g., "gpt-3.5-turbo") - system_text: The system prompts. - reserved: The number of reserved tokens. - - Yields: - The chunk of text. - """ - paragraphs = text.splitlines(keepends=True) - current_token = 0 - current_lines = [] - - reserved = reserved + count_string_tokens(prompt_template+system_text, model_name) - # 100 is a magic number to ensure the maximum context length is not exceeded - max_token = TOKEN_MAX.get(model_name, 2048) - reserved - 100 - - while paragraphs: - paragraph = paragraphs.pop(0) - token = count_string_tokens(paragraph, model_name) - if current_token + token <= max_token: - current_lines.append(paragraph) - current_token += token - elif token > max_token: - paragraphs = split_paragraph(paragraph) + paragraphs - continue - else: - yield prompt_template.format("".join(current_lines)) - current_lines = [paragraph] - current_token = token - - if current_lines: - yield prompt_template.format("".join(current_lines)) - - -def split_paragraph(paragraph: str, sep: str = ".,", count: int = 2) -> list[str]: - """Split a paragraph into multiple parts. - - Args: - paragraph: The paragraph to split. - sep: The separator character. 
- count: The number of parts to split the paragraph into. - - Returns: - A list of split parts of the paragraph. - """ - for i in sep: - sentences = list(_split_text_with_ends(paragraph, i)) - if len(sentences) <= 1: - continue - ret = ["".join(j) for j in _split_by_count(sentences, count)] - return ret - return _split_by_count(paragraph, count) - - -def decode_unicode_escape(text: str) -> str: - """Decode a text with unicode escape sequences. - - Args: - text: The text to decode. - - Returns: - The decoded text. - """ - return text.encode("utf-8").decode("unicode_escape", "ignore") - - -def _split_by_count(lst: Sequence , count: int): - avg = len(lst) // count - remainder = len(lst) % count - start = 0 - for i in range(count): - end = start + avg + (1 if i < remainder else 0) - yield lst[start:end] - start = end - - -def _split_text_with_ends(text: str, sep: str = "."): - parts = [] - for i in text: - parts.append(i) - if i == sep: - yield "".join(parts) - parts = [] - if parts: - yield "".join(parts) diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/utils.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/utils.py deleted file mode 100644 index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = 
logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/xiaolongbaox/gpt2.0/assets/custom.css b/spaces/xiaolongbaox/gpt2.0/assets/custom.css deleted file mode 100644 index f98c7df263b11afa4ddfb5d6ed18aef2ef234226..0000000000000000000000000000000000000000 --- a/spaces/xiaolongbaox/gpt2.0/assets/custom.css +++ /dev/null @@ -1,250 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* 覆盖gradio的页脚信息QAQ */ -footer { - display: none !important; -} -#footer{ - text-align: center; -} -#footer div{ - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.85; -} - -/* user_info */ -#user_info { - white-space: nowrap; - margin-top: -1.3em !important; - padding-left: 112px !important; -} -#user_info p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -/* usage_display */ -#usage_display { - position: relative; - margin: 0; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - padding: .5em 1em; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: 0 1em; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill);; - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - color: #000000 
!important; - } - [data-testid = "bot"] { - background-color: #FFFFFF !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - color: #FFFFFF !important; - } - [data-testid = "bot"] { - background-color: #2C2C2C !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight 
.nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/xp3857/Image_Restoration_Colorization/Global/models/__init__.py b/spaces/xp3857/Image_Restoration_Colorization/Global/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/xuxw98/TAPA/app_backup.py b/spaces/xuxw98/TAPA/app_backup.py deleted file mode 100644 index 85c20022f76b7767ea723f42cefcbded30cef725..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/app_backup.py +++ /dev/null @@ -1,220 +0,0 @@ -import json -import os -import glob -import sys -import time -from pathlib import Path -from typing import Tuple - -from huggingface_hub import hf_hub_download -from PIL import Image -import gradio as gr -import torch -from fairscale.nn.model_parallel.initialize import initialize_model_parallel - -from llama import LLaMA, ModelArgs, Tokenizer, Transformer, VisionModel - -os.environ['CUDA_LAUNCH_BLOCKING'] = '1' - -PROMPT_DICT = { - "prompt_input": ( - "Below is an instruction that describes a task, paired with an input that provides further context. 
" - "Write a response that appropriately completes the request.\n\n" - "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:" - ), - "prompt_no_input": ( - "Below is an instruction that describes a task. " - "Write a response that appropriately completes the request.\n\n" - "### Instruction:\n{instruction}\n\n### Response:" - ), -} - - -def setup_model_parallel() -> Tuple[int, int]: - os.environ['RANK'] = '0' - os.environ['WORLD_SIZE'] = '1' - os.environ['MP'] = '1' - os.environ['MASTER_ADDR'] = '127.0.0.1' - os.environ['MASTER_PORT'] = '2223' - local_rank = int(os.environ.get("LOCAL_RANK", -1)) - world_size = int(os.environ.get("WORLD_SIZE", -1)) - - torch.distributed.init_process_group("nccl") - initialize_model_parallel(world_size) - torch.cuda.set_device(local_rank) - - # seed must be the same in all processes - torch.manual_seed(1) - return local_rank, world_size - - -def load( - ckpt0_path: str, - ckpt1_path: str, - param_path: str, - tokenizer_path: str, - instruct_adapter_path: str, - caption_adapter_path: str, - local_rank: int, - world_size: int, - max_seq_len: int, - max_batch_size: int, -) -> LLaMA: - start_time = time.time() - print("Loading") - instruct_adapter_checkpoint = torch.load( - instruct_adapter_path, map_location="cpu") - caption_adapter_checkpoint = torch.load( - caption_adapter_path, map_location="cpu") - with open(param_path, "r") as f: - params = json.loads(f.read()) - - model_args: ModelArgs = ModelArgs( - max_seq_len=max_seq_len, max_batch_size=max_batch_size, **params - ) - model_args.adapter_layer = int( - instruct_adapter_checkpoint['adapter_query.weight'].shape[0] / model_args.adapter_len) - model_args.cap_adapter_layer = int( - caption_adapter_checkpoint['cap_adapter_query.weight'].shape[0] / model_args.cap_adapter_len) - - tokenizer = Tokenizer(model_path=tokenizer_path) - model_args.vocab_size = tokenizer.n_words - torch.set_default_tensor_type(torch.cuda.HalfTensor) - model = Transformer(model_args) - - # To reduce memory usuage - ckpt0 = torch.load(ckpt0_path, map_location='cuda') - model.load_state_dict(ckpt0, strict=False) - del ckpt0 - torch.cuda.empty_cache() - - ckpt1 = torch.load(ckpt1_path, map_location='cuda') - model.load_state_dict(ckpt1, strict=False) - del ckpt1 - torch.cuda.empty_cache() - - vision_model = VisionModel(model_args) - - torch.set_default_tensor_type(torch.FloatTensor) - model.load_state_dict(instruct_adapter_checkpoint, strict=False) - model.load_state_dict(caption_adapter_checkpoint, strict=False) - vision_model.load_state_dict(caption_adapter_checkpoint, strict=False) - - generator = LLaMA(model, tokenizer, vision_model) - print(f"Loaded in {time.time() - start_time:.2f} seconds") - return generator - - -def instruct_generate( - instruct: str, - input: str = 'none', - max_gen_len=512, - temperature: float = 0.1, - top_p: float = 0.75, -): - if input == 'none': - prompt = PROMPT_DICT['prompt_no_input'].format_map( - {'instruction': instruct, 'input': ''}) - else: - prompt = PROMPT_DICT['prompt_input'].format_map( - {'instruction': instruct, 'input': input}) - - results = generator.generate( - [prompt], max_gen_len=max_gen_len, temperature=temperature, top_p=top_p - ) - result = results[0].strip() - print(result) - return result - - -def download_llama_adapter(instruct_adapter_path, caption_adapter_path): - if not os.path.exists(instruct_adapter_path): - os.system( - f"wget -q -O {instruct_adapter_path} 
https://github.com/ZrrSkywalker/LLaMA-Adapter/releases/download/v.1.0.0/llama_adapter_len10_layer30_release.pth") - - if not os.path.exists(caption_adapter_path): - os.system( - f"wget -q -O {caption_adapter_path} https://github.com/ZrrSkywalker/LLaMA-Adapter/releases/download/v.1.0.0/llama_adapter_len10_layer30_caption_vit_l.pth") - - -# ckpt_path = "/data1/llma/7B/consolidated.00.pth" -# param_path = "/data1/llma/7B/params.json" -# tokenizer_path = "/data1/llma/tokenizer.model" -ckpt0_path = hf_hub_download( - repo_id="csuhan/llama_storage", filename="consolidated.00_part0.pth") -ckpt1_path = hf_hub_download( - repo_id="csuhan/llama_storage", filename="consolidated.00_part1.pth") -param_path = hf_hub_download( - repo_id="nyanko7/LLaMA-7B", filename="params.json") -tokenizer_path = hf_hub_download( - repo_id="nyanko7/LLaMA-7B", filename="tokenizer.model") -instruct_adapter_path = "llama_adapter_len10_layer30_release.pth" -caption_adapter_path = "llama_adapter_len10_layer30_caption_vit_l.pth" -max_seq_len = 512 -max_batch_size = 1 - -# download models -# download_llama_adapter(instruct_adapter_path, caption_adapter_path) - -local_rank, world_size = setup_model_parallel() -if local_rank > 0: - sys.stdout = open(os.devnull, "w") - -generator = load( - ckpt0_path, ckpt1_path, param_path, tokenizer_path, instruct_adapter_path, caption_adapter_path, local_rank, world_size, max_seq_len, max_batch_size -) - - -def create_instruct_demo(): - with gr.Blocks() as instruct_demo: - with gr.Row(): - with gr.Column(): - instruction = gr.Textbox(lines=2, label="Instruction") - input = gr.Textbox( - lines=2, label="Context input", placeholder='none') - max_len = gr.Slider(minimum=1, maximum=512, - value=128, label="Max length") - with gr.Accordion(label='Advanced options', open=False): - temp = gr.Slider(minimum=0, maximum=1, - value=0.1, label="Temperature") - top_p = gr.Slider(minimum=0, maximum=1, - value=0.75, label="Top p") - - run_botton = gr.Button("Run") - - with gr.Column(): - outputs = gr.Textbox(lines=10, label="Output") - - inputs = [instruction, input, max_len, temp, top_p] - - examples = [ - "Tell me about alpacas.", - "Write a Python program that prints the first 10 Fibonacci numbers.", - "Write a conversation between the sun and pluto.", - "Write a theory to explain why cat never existed", - ] - examples = [ - [x, "none", 128, 0.1, 0.75] - for x in examples] - - gr.Examples( - examples=examples, - inputs=inputs, - outputs=outputs, - fn=instruct_generate, - cache_examples=os.getenv('SYSTEM') == 'spaces' - ) - run_botton.click(fn=instruct_generate, inputs=inputs, outputs=outputs) - return instruct_demo - - -description = """ -# TAPA: xxx -""" - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(description) - with gr.TabItem("Instruction-Following"): - create_instruct_demo() - -demo.queue(api_open=True, concurrency_count=1).launch() diff --git "a/spaces/xwsm/gpt/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" "b/spaces/xwsm/gpt/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" deleted file mode 100644 index efada619a6fe121cba28a18f92b3c4a0de4c88bc..0000000000000000000000000000000000000000 --- "a/spaces/xwsm/gpt/crazy_functions/Latex\345\205\250\346\226\207\347\277\273\350\257\221.py" +++ /dev/null @@ -1,175 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -fast_debug = False - -class PaperFileGroup(): - def __init__(self): - self.file_paths = [] - self.file_contents = [] - 
self.sp_file_contents = [] - self.sp_file_index = [] - self.sp_file_tag = [] - - # count_token - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - self.get_token_num = get_token_num - - def run_file_split(self, max_token_limit=1900): - """ - 将长文本分离开来 - """ - for index, file_content in enumerate(self.file_contents): - if self.get_token_num(file_content) < max_token_limit: - self.sp_file_contents.append(file_content) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index]) - else: - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - segments = breakdown_txt_to_satisfy_token_limit_for_pdf(file_content, self.get_token_num, max_token_limit) - for j, segment in enumerate(segments): - self.sp_file_contents.append(segment) - self.sp_file_index.append(index) - self.sp_file_tag.append(self.file_paths[index] + f".part-{j}.tex") - - print('Segmentation: done') - -def 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en'): - import time, os, re - from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency - - # <-------- 读取Latex文件,删除其中的所有注释 ----------> - pfg = PaperFileGroup() - - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - # 定义注释的正则表达式 - comment_pattern = r'%.*' - # 使用正则表达式查找注释,并替换为空字符串 - clean_tex_content = re.sub(comment_pattern, '', file_content) - # 记录删除注释后的文本 - pfg.file_paths.append(fp) - pfg.file_contents.append(clean_tex_content) - - # <-------- 拆分过长的latex文件 ----------> - pfg.run_file_split(max_token_limit=1024) - n_split = len(pfg.sp_file_contents) - - # <-------- 抽取摘要 ----------> - # if language == 'en': - # abs_extract_inputs = f"Please write an abstract for this paper" - - # # 单线,获取文章meta信息 - # paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive( - # inputs=abs_extract_inputs, - # inputs_show_user=f"正在抽取摘要信息。", - # llm_kwargs=llm_kwargs, - # chatbot=chatbot, history=[], - # sys_prompt="Your job is to collect information from materials。", - # ) - - # <-------- 多线程润色开始 ----------> - if language == 'en->zh': - inputs_array = ["Below is a section from an English academic paper, translate it into Chinese, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." for _ in range(n_split)] - elif language == 'zh->en': - inputs_array = [f"Below is a section from a Chinese academic paper, translate it into English, do not modify any latex command such as \section, \cite and equations:" + - f"\n\n{frag}" for frag in pfg.sp_file_contents] - inputs_show_user_array = [f"翻译 {f}" for f in pfg.sp_file_tag] - sys_prompt_array = ["You are a professional academic paper translator." 
for _ in range(n_split)] - - gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency( - inputs_array=inputs_array, - inputs_show_user_array=inputs_show_user_array, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history_array=[[""] for _ in range(n_split)], - sys_prompt_array=sys_prompt_array, - # max_workers=5, # OpenAI所允许的最大并行过载 - scroller_max_len = 80 - ) - - # <-------- 整理结果,退出 ----------> - create_report_file_name = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + f"-chatgpt.polish.md" - res = write_results_to_file(gpt_response_collection, file_name=create_report_file_name) - history = gpt_response_collection - chatbot.append((f"{fp}完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - - -@CatchException -def Latex英译中(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='en->zh') - - - - - -@CatchException -def Latex中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "对整个Latex项目进行翻译。函数插件贡献者: Binary-Husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import tiktoken - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 多文件翻译(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, language='zh->en') \ No newline at end of file diff --git a/spaces/yangogo/bingo/src/components/chat-notification.tsx b/spaces/yangogo/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- 
a/spaces/yangogo/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
          - 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
          - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
          - 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
          - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
          -
          -
          -
          -
          - error - {getAction(message.error, () => bot.resetConversation())} -
          -
          -
          -
          -
          - ) -} diff --git a/spaces/yaoshining/text-generation-webui/modules/extensions.py b/spaces/yaoshining/text-generation-webui/modules/extensions.py deleted file mode 100644 index 4950e04e19080d764881c29ced8d6cc3433faf97..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/modules/extensions.py +++ /dev/null @@ -1,193 +0,0 @@ -import traceback -from functools import partial - -import gradio as gr - -import extensions -import modules.shared as shared -from modules.logging_colors import logger - -state = {} -available_extensions = [] -setup_called = set() - - -def apply_settings(extension, name): - if not hasattr(extension, 'params'): - return - - for param in extension.params: - _id = f"{name}-{param}" - if _id not in shared.settings: - continue - - extension.params[param] = shared.settings[_id] - - -def load_extensions(): - global state, setup_called - for i, name in enumerate(shared.args.extensions): - if name in available_extensions: - if name != 'api': - logger.info(f'Loading the extension "{name}"...') - try: - exec(f"import extensions.{name}.script") - extension = getattr(extensions, name).script - apply_settings(extension, name) - if extension not in setup_called and hasattr(extension, "setup"): - setup_called.add(extension) - extension.setup() - - state[name] = [True, i] - except: - logger.error(f'Failed to load the extension "{name}".') - traceback.print_exc() - - -# This iterator returns the extensions in the order specified in the command-line -def iterator(): - for name in sorted(state, key=lambda x: state[x][1]): - if state[name][0]: - yield getattr(extensions, name).script, name - - -# Extension functions that map string -> string -def _apply_string_extensions(function_name, text): - for extension, _ in iterator(): - if hasattr(extension, function_name): - text = getattr(extension, function_name)(text) - - return text - - -# Input hijack of extensions -def _apply_input_hijack(text, visible_text): - for extension, _ in iterator(): - if hasattr(extension, 'input_hijack') and extension.input_hijack['state']: - extension.input_hijack['state'] = False - if callable(extension.input_hijack['value']): - text, visible_text = extension.input_hijack['value'](text, visible_text) - else: - text, visible_text = extension.input_hijack['value'] - - return text, visible_text - - -# custom_generate_chat_prompt handling - currently only the first one will work -def _apply_custom_generate_chat_prompt(text, state, **kwargs): - for extension, _ in iterator(): - if hasattr(extension, 'custom_generate_chat_prompt'): - return extension.custom_generate_chat_prompt(text, state, **kwargs) - - return None - - -# Extension that modifies the input parameters before they are used -def _apply_state_modifier_extensions(state): - for extension, _ in iterator(): - if hasattr(extension, "state_modifier"): - state = getattr(extension, "state_modifier")(state) - - return state - - -# Extension that modifies the chat history before it is used -def _apply_history_modifier_extensions(history): - for extension, _ in iterator(): - if hasattr(extension, "history_modifier"): - history = getattr(extension, "history_modifier")(history) - - return history - - -# Extension functions that override the default tokenizer output - currently only the first one will work -def _apply_tokenizer_extensions(function_name, state, prompt, input_ids, input_embeds): - for extension, _ in iterator(): - if hasattr(extension, function_name): - return getattr(extension, function_name)(state, prompt, 
input_ids, input_embeds) - - return prompt, input_ids, input_embeds - - -# Get prompt length in tokens after applying extension functions which override the default tokenizer output -# currently only the first one will work -def _apply_custom_tokenized_length(prompt): - for extension, _ in iterator(): - if hasattr(extension, 'custom_tokenized_length'): - return getattr(extension, 'custom_tokenized_length')(prompt) - - return None - - -# Custom generate reply handling - currently only the first one will work -def _apply_custom_generate_reply(): - for extension, _ in iterator(): - if hasattr(extension, 'custom_generate_reply'): - return getattr(extension, 'custom_generate_reply') - - return None - - -def _apply_custom_css(): - all_css = '' - for extension, _ in iterator(): - if hasattr(extension, 'custom_css'): - all_css += getattr(extension, 'custom_css')() - - return all_css - - -def _apply_custom_js(): - all_js = '' - for extension, _ in iterator(): - if hasattr(extension, 'custom_js'): - all_js += getattr(extension, 'custom_js')() - - return all_js - - -def create_extensions_block(): - to_display = [] - for extension, name in iterator(): - if hasattr(extension, "ui") and not (hasattr(extension, 'params') and extension.params.get('is_tab', False)): - to_display.append((extension, name)) - - # Creating the extension ui elements - if len(to_display) > 0: - with gr.Column(elem_id="extensions"): - for row in to_display: - extension, name = row - display_name = getattr(extension, 'params', {}).get('display_name', name) - gr.Markdown(f"\n### {display_name}") - extension.ui() - - -def create_extensions_tabs(): - for extension, name in iterator(): - if hasattr(extension, "ui") and (hasattr(extension, 'params') and extension.params.get('is_tab', False)): - display_name = getattr(extension, 'params', {}).get('display_name', name) - with gr.Tab(display_name, elem_classes="extension-tab"): - extension.ui() - - -EXTENSION_MAP = { - "input": partial(_apply_string_extensions, "input_modifier"), - "output": partial(_apply_string_extensions, "output_modifier"), - "state": _apply_state_modifier_extensions, - "history": _apply_history_modifier_extensions, - "bot_prefix": partial(_apply_string_extensions, "bot_prefix_modifier"), - "tokenizer": partial(_apply_tokenizer_extensions, "tokenizer_modifier"), - "input_hijack": _apply_input_hijack, - "custom_generate_chat_prompt": _apply_custom_generate_chat_prompt, - "custom_generate_reply": _apply_custom_generate_reply, - "tokenized_length": _apply_custom_tokenized_length, - "css": _apply_custom_css, - "js": _apply_custom_js -} - - -def apply_extensions(typ, *args, **kwargs): - if typ not in EXTENSION_MAP: - raise ValueError(f"Invalid extension type {typ}") - - return EXTENSION_MAP[typ](*args, **kwargs) diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/dataset.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/dataset.py deleted file mode 100644 index cf142226b1794b675d61151467444cb65bdaa1a0..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/dataset.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. 
- -"""Multi-resolution input data pipeline.""" - -import os -import glob -import numpy as np -import tensorflow as tf -import dnnlib -import dnnlib.tflib as tflib - -#---------------------------------------------------------------------------- -# Parse individual image from a tfrecords file. - -def parse_tfrecord_tf(record): - features = tf.parse_single_example(record, features={ - 'shape': tf.FixedLenFeature([3], tf.int64), - 'data': tf.FixedLenFeature([], tf.string)}) - data = tf.decode_raw(features['data'], tf.uint8) - return tf.reshape(data, features['shape']) - -def parse_tfrecord_np(record): - ex = tf.train.Example() - ex.ParseFromString(record) - shape = ex.features.feature['shape'].int64_list.value # temporary pylint workaround # pylint: disable=no-member - data = ex.features.feature['data'].bytes_list.value[0] # temporary pylint workaround # pylint: disable=no-member - return np.fromstring(data, np.uint8).reshape(shape) - -#---------------------------------------------------------------------------- -# Dataset class that loads data from tfrecords files. - -class TFRecordDataset: - def __init__(self, - tfrecord_dir, # Directory containing a collection of tfrecords files. - resolution = None, # Dataset resolution, None = autodetect. - label_file = None, # Relative path of the labels file, None = autodetect. - max_label_size = 0, # 0 = no labels, 'full' = full labels, = N first label components. - repeat = True, # Repeat dataset indefinitely. - shuffle_mb = 4096, # Shuffle data within specified window (megabytes), 0 = disable shuffling. - prefetch_mb = 2048, # Amount of data to prefetch (megabytes), 0 = disable prefetching. - buffer_mb = 256, # Read buffer size (megabytes). - num_threads = 2): # Number of concurrent threads. - - self.tfrecord_dir = tfrecord_dir - self.resolution = None - self.resolution_log2 = None - self.shape = [] # [channel, height, width] - self.dtype = 'uint8' - self.dynamic_range = [0, 255] - self.label_file = label_file - self.label_size = None # [component] - self.label_dtype = None - self._np_labels = None - self._tf_minibatch_in = None - self._tf_labels_var = None - self._tf_labels_dataset = None - self._tf_datasets = dict() - self._tf_iterator = None - self._tf_init_ops = dict() - self._tf_minibatch_np = None - self._cur_minibatch = -1 - self._cur_lod = -1 - - # List tfrecords files and inspect their shapes. - assert os.path.isdir(self.tfrecord_dir) - tfr_files = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.tfrecords'))) - assert len(tfr_files) >= 1 - tfr_shapes = [] - for tfr_file in tfr_files: - tfr_opt = tf.python_io.TFRecordOptions(tf.python_io.TFRecordCompressionType.NONE) - for record in tf.python_io.tf_record_iterator(tfr_file, tfr_opt): - tfr_shapes.append(parse_tfrecord_np(record).shape) - break - - # Autodetect label filename. - if self.label_file is None: - guess = sorted(glob.glob(os.path.join(self.tfrecord_dir, '*.labels'))) - if len(guess): - self.label_file = guess[0] - elif not os.path.isfile(self.label_file): - guess = os.path.join(self.tfrecord_dir, self.label_file) - if os.path.isfile(guess): - self.label_file = guess - - # Determine shape and resolution. 
- max_shape = max(tfr_shapes, key=np.prod) - self.resolution = resolution if resolution is not None else max_shape[1] - self.resolution_log2 = int(np.log2(self.resolution)) - self.shape = [max_shape[0], self.resolution, self.resolution] - tfr_lods = [self.resolution_log2 - int(np.log2(shape[1])) for shape in tfr_shapes] - assert all(shape[0] == max_shape[0] for shape in tfr_shapes) - assert all(shape[1] == shape[2] for shape in tfr_shapes) - assert all(shape[1] == self.resolution // (2**lod) for shape, lod in zip(tfr_shapes, tfr_lods)) - assert all(lod in tfr_lods for lod in range(self.resolution_log2 - 1)) - - # Load labels. - assert max_label_size == 'full' or max_label_size >= 0 - self._np_labels = np.zeros([1<<20, 0], dtype=np.float32) - if self.label_file is not None and max_label_size != 0: - self._np_labels = np.load(self.label_file) - assert self._np_labels.ndim == 2 - if max_label_size != 'full' and self._np_labels.shape[1] > max_label_size: - self._np_labels = self._np_labels[:, :max_label_size] - self.label_size = self._np_labels.shape[1] - self.label_dtype = self._np_labels.dtype.name - - # Build TF expressions. - with tf.name_scope('Dataset'), tf.device('/cpu:0'): - self._tf_minibatch_in = tf.placeholder(tf.int64, name='minibatch_in', shape=[]) - self._tf_labels_var = tflib.create_var_with_large_initial_value(self._np_labels, name='labels_var') - self._tf_labels_dataset = tf.data.Dataset.from_tensor_slices(self._tf_labels_var) - for tfr_file, tfr_shape, tfr_lod in zip(tfr_files, tfr_shapes, tfr_lods): - if tfr_lod < 0: - continue - dset = tf.data.TFRecordDataset(tfr_file, compression_type='', buffer_size=buffer_mb<<20) - dset = dset.map(parse_tfrecord_tf, num_parallel_calls=num_threads) - dset = tf.data.Dataset.zip((dset, self._tf_labels_dataset)) - bytes_per_item = np.prod(tfr_shape) * np.dtype(self.dtype).itemsize - if shuffle_mb > 0: - dset = dset.shuffle(((shuffle_mb << 20) - 1) // bytes_per_item + 1) - if repeat: - dset = dset.repeat() - if prefetch_mb > 0: - dset = dset.prefetch(((prefetch_mb << 20) - 1) // bytes_per_item + 1) - dset = dset.batch(self._tf_minibatch_in) - self._tf_datasets[tfr_lod] = dset - self._tf_iterator = tf.data.Iterator.from_structure(self._tf_datasets[0].output_types, self._tf_datasets[0].output_shapes) - self._tf_init_ops = {lod: self._tf_iterator.make_initializer(dset) for lod, dset in self._tf_datasets.items()} - - # Use the given minibatch size and level-of-detail for the data returned by get_minibatch_tf(). - def configure(self, minibatch_size, lod=0): - lod = int(np.floor(lod)) - assert minibatch_size >= 1 and lod in self._tf_datasets - if self._cur_minibatch != minibatch_size or self._cur_lod != lod: - self._tf_init_ops[lod].run({self._tf_minibatch_in: minibatch_size}) - self._cur_minibatch = minibatch_size - self._cur_lod = lod - - # Get next minibatch as TensorFlow expressions. - def get_minibatch_tf(self): # => images, labels - return self._tf_iterator.get_next() - - # Get next minibatch as NumPy arrays. - def get_minibatch_np(self, minibatch_size, lod=0): # => images, labels - self.configure(minibatch_size, lod) - if self._tf_minibatch_np is None: - self._tf_minibatch_np = self.get_minibatch_tf() - return tflib.run(self._tf_minibatch_np) - - # Get random labels as TensorFlow expression. 
- def get_random_labels_tf(self, minibatch_size): # => labels - if self.label_size > 0: - with tf.device('/cpu:0'): - return tf.gather(self._tf_labels_var, tf.random_uniform([minibatch_size], 0, self._np_labels.shape[0], dtype=tf.int32)) - return tf.zeros([minibatch_size, 0], self.label_dtype) - - # Get random labels as NumPy array. - def get_random_labels_np(self, minibatch_size): # => labels - if self.label_size > 0: - return self._np_labels[np.random.randint(self._np_labels.shape[0], size=[minibatch_size])] - return np.zeros([minibatch_size, 0], self.label_dtype) - -#---------------------------------------------------------------------------- -# Base class for datasets that are generated on the fly. - -class SyntheticDataset: - def __init__(self, resolution=1024, num_channels=3, dtype='uint8', dynamic_range=[0,255], label_size=0, label_dtype='float32'): - self.resolution = resolution - self.resolution_log2 = int(np.log2(resolution)) - self.shape = [num_channels, resolution, resolution] - self.dtype = dtype - self.dynamic_range = dynamic_range - self.label_size = label_size - self.label_dtype = label_dtype - self._tf_minibatch_var = None - self._tf_lod_var = None - self._tf_minibatch_np = None - self._tf_labels_np = None - - assert self.resolution == 2 ** self.resolution_log2 - with tf.name_scope('Dataset'): - self._tf_minibatch_var = tf.Variable(np.int32(0), name='minibatch_var') - self._tf_lod_var = tf.Variable(np.int32(0), name='lod_var') - - def configure(self, minibatch_size, lod=0): - lod = int(np.floor(lod)) - assert minibatch_size >= 1 and 0 <= lod <= self.resolution_log2 - tflib.set_vars({self._tf_minibatch_var: minibatch_size, self._tf_lod_var: lod}) - - def get_minibatch_tf(self): # => images, labels - with tf.name_scope('SyntheticDataset'): - shrink = tf.cast(2.0 ** tf.cast(self._tf_lod_var, tf.float32), tf.int32) - shape = [self.shape[0], self.shape[1] // shrink, self.shape[2] // shrink] - images = self._generate_images(self._tf_minibatch_var, self._tf_lod_var, shape) - labels = self._generate_labels(self._tf_minibatch_var) - return images, labels - - def get_minibatch_np(self, minibatch_size, lod=0): # => images, labels - self.configure(minibatch_size, lod) - if self._tf_minibatch_np is None: - self._tf_minibatch_np = self.get_minibatch_tf() - return tflib.run(self._tf_minibatch_np) - - def get_random_labels_tf(self, minibatch_size): # => labels - with tf.name_scope('SyntheticDataset'): - return self._generate_labels(minibatch_size) - - def get_random_labels_np(self, minibatch_size): # => labels - self.configure(minibatch_size) - if self._tf_labels_np is None: - self._tf_labels_np = self.get_random_labels_tf(minibatch_size) - return tflib.run(self._tf_labels_np) - - def _generate_images(self, minibatch, lod, shape): # to be overridden by subclasses # pylint: disable=unused-argument - return tf.zeros([minibatch] + shape, self.dtype) - - def _generate_labels(self, minibatch): # to be overridden by subclasses - return tf.zeros([minibatch, self.label_size], self.label_dtype) - -#---------------------------------------------------------------------------- -# Helper func for constructing a dataset object using the given options. - -def load_dataset(class_name='training.dataset.TFRecordDataset', data_dir=None, verbose=False, **kwargs): - adjusted_kwargs = dict(kwargs) - if 'tfrecord_dir' in adjusted_kwargs and data_dir is not None: - adjusted_kwargs['tfrecord_dir'] = os.path.join(data_dir, adjusted_kwargs['tfrecord_dir']) - if verbose: - print('Streaming data using %s...' 
% class_name) - dataset = dnnlib.util.get_obj_by_name(class_name)(**adjusted_kwargs) - if verbose: - print('Dataset shape =', np.int32(dataset.shape).tolist()) - print('Dynamic range =', dataset.dynamic_range) - print('Label size =', dataset.label_size) - return dataset - -#---------------------------------------------------------------------------- diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolbar.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolbar.tsx deleted file mode 100644 index a25aa043c2f9bed7156a50e88f3e0fd38335452a..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/PianoRollToolbar/PianoRollToolbar.tsx +++ /dev/null @@ -1,86 +0,0 @@ -import styled from "@emotion/styled" -import { observer } from "mobx-react-lite" -import { FC, useCallback } from "react" -import { useStores } from "../../hooks/useStores" -import InstrumentBrowser from "../InstrumentBrowser/InstrumentBrowser" -import { AutoScrollButton } from "../Toolbar/AutoScrollButton" -import QuantizeSelector from "../Toolbar/QuantizeSelector/QuantizeSelector" -import { Toolbar } from "../Toolbar/Toolbar" -import { TrackListMenuButton } from "../TrackList/TrackListMenuButton" -import { EventListButton } from "./EventListButton" -import { InstrumentButton } from "./InstrumentButton" -import { PanSlider } from "./PanSlider" -import { PianoRollToolSelector } from "./PianoRollToolSelector" -import { TrackNameInput } from "./TrackNameInput" -import { VolumeSlider } from "./VolumeSlider" - -const Spacer = styled.div` - width: 1rem; -` - -const FlexibleSpacer = styled.div` - flex-grow: 1; -` - -export const PianoRollToolbar: FC = observer(() => { - const { pianoRollStore } = useStores() - - const { - quantize, - autoScroll, - isQuantizeEnabled, - selectedTrack, - selectedTrackId, - } = pianoRollStore - - const onClickAutoScroll = useCallback( - () => (pianoRollStore.autoScroll = !pianoRollStore.autoScroll), - [pianoRollStore], - ) - - const onSelectQuantize = useCallback( - (denominator: number) => { - pianoRollStore.quantize = denominator - }, - [pianoRollStore], - ) - - const onClickQuantizeSwitch = useCallback(() => { - pianoRollStore.isQuantizeEnabled = !pianoRollStore.isQuantizeEnabled - }, [pianoRollStore]) - - if (selectedTrack === undefined) { - return <> - } - - return ( - - - - - - - - - - - - - - - - - - - - - - - - ) -}) diff --git a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/melgan.py b/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/melgan.py deleted file mode 100644 index b593bfbf4bb2eeb07bc7a34a82a5d3c7ee379f73..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/modules/vocoder/parallel_wavegan/models/melgan.py +++ /dev/null @@ -1,458 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 Tomoki Hayashi -# MIT License (https://opensource.org/licenses/MIT) - -"""MelGAN Modules.""" - -import logging - -import numpy as np -import torch -from torch import nn - -from modules.vocoder.parallel_wavegan.layers import CausalConv1d -from modules.vocoder.parallel_wavegan.layers import CausalConvTranspose1d -from modules.vocoder.parallel_wavegan.layers import ResidualStack -from modules.vocoder.parallel_wavegan.models.source import SourceModuleCycNoise_v1 - - -class MelGANGenerator(torch.nn.Module): - """MelGAN generator module.""" - - def __init__(self, - in_channels=80, - out_channels=1, - kernel_size=7, - channels=512, 
- bias=True, - upsample_scales=[8, 8, 2, 2], - stack_kernel_size=3, - stacks=3, - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_final_nonlinear_activation=True, - use_weight_norm=True, - use_causal_conv=False, - use_pitch_embed=False, - use_nsf=False, - sample_rate=22050, - **kwargs - ): - """Initialize MelGANGenerator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_size (int): Kernel size of initial and final conv layer. - channels (int): Initial number of channels for conv layer. - bias (bool): Whether to add bias parameter in convolution layers. - upsample_scales (list): List of upsampling scales. - stack_kernel_size (int): Kernel size of dilated conv layers in residual stack. - stacks (int): Number of stacks in a single residual stack. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_final_nonlinear_activation (torch.nn.Module): Activation function for the final layer. - use_weight_norm (bool): Whether to use weight norm. - If set to true, it will be applied to all of the conv layers. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANGenerator, self).__init__() - - # check hyper parameters is valid - assert channels >= np.prod(upsample_scales) - assert channels % (2 ** len(upsample_scales)) == 0 - if not use_causal_conv: - assert (kernel_size - 1) % 2 == 0, "Not support even number kernel size." - - # add initial layer - layers = [] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(in_channels, channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - - self.use_pitch_embed = use_pitch_embed - if use_pitch_embed: - self.pitch_embed = nn.Embedding(300, in_channels, 0) - self.c_proj = nn.Conv1d(2 * in_channels, in_channels, 1) - - for i, upsample_scale in enumerate(upsample_scales): - # add upsampling layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - torch.nn.ConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - padding=upsample_scale // 2 + upsample_scale % 2, - output_padding=upsample_scale % 2, - bias=bias, - ) - ] - else: - layers += [ - CausalConvTranspose1d( - channels // (2 ** i), - channels // (2 ** (i + 1)), - upsample_scale * 2, - stride=upsample_scale, - bias=bias, - ) - ] - - # add residual stack - for j in range(stacks): - layers += [ - ResidualStack( - kernel_size=stack_kernel_size, - channels=channels // (2 ** (i + 1)), - dilation=stack_kernel_size ** j, - bias=bias, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - use_causal_conv=use_causal_conv, - ) - ] - self.use_nsf = use_nsf - if use_nsf: - self.harmonic_num = 8 - hop_size = np.prod(upsample_scales) - self.f0_upsamp = torch.nn.Upsample(scale_factor=hop_size) - # self.m_source = SourceModuleHnNSF(sampling_rate=sample_rate, harmonic_num=self.harmonic_num) - self.m_source = 
SourceModuleCycNoise_v1(sample_rate, 0.003) - self.nsf_conv = nn.Sequential(nn.Conv1d(1, channels // (2 ** (i + 1)), 1), torch.nn.Tanh()) - - # define the model as a single function - self.melgan_body = torch.nn.Sequential(*layers) - layers = [] - # add final layer - layers += [getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params)] - if not use_causal_conv: - layers += [ - getattr(torch.nn, pad)((kernel_size - 1) // 2, **pad_params), - torch.nn.Conv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, bias=bias), - ] - else: - layers += [ - CausalConv1d(channels // (2 ** (i + 1)), out_channels, kernel_size, - bias=bias, pad=pad, pad_params=pad_params), - ] - if use_final_nonlinear_activation: - layers += [torch.nn.Tanh()] - - # define the model as a single function - self.melgan_final = torch.nn.Sequential(*layers) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, c, f0=None, pitch=None): - """Calculate forward propagation. - - Args: - c (Tensor): Input tensor (B, channels, T). - - Returns: - Tensor: Output tensor (B, 1, T ** prod(upsample_scales)). - - """ - if self.use_pitch_embed: - c = self.c_proj(torch.cat([c, self.pitch_embed(pitch).transpose(1, 2)], 1)) - x = self.melgan_body(c) - if self.use_nsf: - f0_upsample = self.f0_upsamp(f0[:, None, :]) - f0_upsample = self.nsf_conv(f0_upsample) - x = x + f0_upsample - x = self.melgan_final(x) - return x - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. - https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) - - -class MelGANDiscriminator(torch.nn.Module): - """MelGAN discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - ): - """Initilize MelGAN discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - kernel_sizes (list): List of two kernel sizes. The prod will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - For example if kernel_sizes = [5, 3], the first layer kernel size will be 5 * 3 = 15, - the last two layers' kernel size will be 5 and 3, respectively. - channels (int): Initial number of channels for conv layer. 
- max_downsample_channels (int): Maximum number of channels for downsampling layers. - bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - - """ - super(MelGANDiscriminator, self).__init__() - self.layers = torch.nn.ModuleList() - - # check kernel size is valid - assert len(kernel_sizes) == 2 - assert kernel_sizes[0] % 2 == 1 - assert kernel_sizes[1] % 2 == 1 - - # add first layer - self.layers += [ - torch.nn.Sequential( - getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params), - torch.nn.Conv1d(in_channels, channels, np.prod(kernel_sizes), bias=bias), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - - # add downsample layers - in_chs = channels - for downsample_scale in downsample_scales: - out_chs = min(in_chs * downsample_scale, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, - kernel_size=downsample_scale * 10 + 1, - stride=downsample_scale, - padding=downsample_scale * 5, - groups=in_chs // 4, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - in_chs = out_chs - - # add final layers - out_chs = min(in_chs * 2, max_downsample_channels) - self.layers += [ - torch.nn.Sequential( - torch.nn.Conv1d( - in_chs, out_chs, kernel_sizes[0], - padding=(kernel_sizes[0] - 1) // 2, - bias=bias, - ), - getattr(torch.nn, nonlinear_activation)(**nonlinear_activation_params), - ) - ] - self.layers += [ - torch.nn.Conv1d( - out_chs, out_channels, kernel_sizes[1], - padding=(kernel_sizes[1] - 1) // 2, - bias=bias, - ), - ] - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of output tensors of each layer. - - """ - outs = [] - for f in self.layers: - x = f(x) - outs += [x] - - return outs - - -class MelGANMultiScaleDiscriminator(torch.nn.Module): - """MelGAN multi-scale discriminator module.""" - - def __init__(self, - in_channels=1, - out_channels=1, - scales=3, - downsample_pooling="AvgPool1d", - # follow the official implementation setting - downsample_pooling_params={ - "kernel_size": 4, - "stride": 2, - "padding": 1, - "count_include_pad": False, - }, - kernel_sizes=[5, 3], - channels=16, - max_downsample_channels=1024, - bias=True, - downsample_scales=[4, 4, 4, 4], - nonlinear_activation="LeakyReLU", - nonlinear_activation_params={"negative_slope": 0.2}, - pad="ReflectionPad1d", - pad_params={}, - use_weight_norm=True, - **kwargs - ): - """Initilize MelGAN multi-scale discriminator module. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - downsample_pooling (str): Pooling module name for downsampling of the inputs. - downsample_pooling_params (dict): Parameters for the above pooling module. - kernel_sizes (list): List of two kernel sizes. The sum will be used for the first conv layer, - and the first and the second kernel sizes will be used for the last two layers. - channels (int): Initial number of channels for conv layer. - max_downsample_channels (int): Maximum number of channels for downsampling layers. 
- bias (bool): Whether to add bias parameter in convolution layers. - downsample_scales (list): List of downsampling scales. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - pad (str): Padding function module name before dilated convolution layer. - pad_params (dict): Hyperparameters for padding function. - use_causal_conv (bool): Whether to use causal convolution. - - """ - super(MelGANMultiScaleDiscriminator, self).__init__() - self.discriminators = torch.nn.ModuleList() - - # add discriminators - for _ in range(scales): - self.discriminators += [ - MelGANDiscriminator( - in_channels=in_channels, - out_channels=out_channels, - kernel_sizes=kernel_sizes, - channels=channels, - max_downsample_channels=max_downsample_channels, - bias=bias, - downsample_scales=downsample_scales, - nonlinear_activation=nonlinear_activation, - nonlinear_activation_params=nonlinear_activation_params, - pad=pad, - pad_params=pad_params, - ) - ] - self.pooling = getattr(torch.nn, downsample_pooling)(**downsample_pooling_params) - - # apply weight norm - if use_weight_norm: - self.apply_weight_norm() - - # reset parameters - self.reset_parameters() - - def forward(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input noise signal (B, 1, T). - - Returns: - List: List of list of each discriminator outputs, which consists of each layer output tensors. - - """ - outs = [] - for f in self.discriminators: - outs += [f(x)] - x = self.pooling(x) - - return outs - - def remove_weight_norm(self): - """Remove weight normalization module from all of the layers.""" - def _remove_weight_norm(m): - try: - logging.debug(f"Weight norm is removed from {m}.") - torch.nn.utils.remove_weight_norm(m) - except ValueError: # this module didn't have weight norm - return - - self.apply(_remove_weight_norm) - - def apply_weight_norm(self): - """Apply weight normalization module from all of the layers.""" - def _apply_weight_norm(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - torch.nn.utils.weight_norm(m) - logging.debug(f"Weight norm is applied to {m}.") - - self.apply(_apply_weight_norm) - - def reset_parameters(self): - """Reset parameters. - - This initialization follows official implementation manner. 
- https://github.com/descriptinc/melgan-neurips/blob/master/spec2wav/modules.py - - """ - def _reset_parameters(m): - if isinstance(m, torch.nn.Conv1d) or isinstance(m, torch.nn.ConvTranspose1d): - m.weight.data.normal_(0.0, 0.02) - logging.debug(f"Reset parameters in {m}.") - - self.apply(_reset_parameters) diff --git a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py b/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py deleted file mode 100644 index b2a8b54a91709c71437e15c68d3be9a9b0a20a34..0000000000000000000000000000000000000000 --- a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py +++ /dev/null @@ -1,317 +0,0 @@ -import numpy as np -from numpy.linalg import inv, lstsq -from numpy.linalg import matrix_rank as rank -from numpy.linalg import norm - - -class MatlabCp2tormException(Exception): - - def __str__(self): - return 'In File {}:{}'.format(__file__, super.__str__(self)) - - -def tformfwd(trans, uv): - """ - Function: - ---------- - apply affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of transformed coordinates (x, y) - """ - uv = np.hstack((uv, np.ones((uv.shape[0], 1)))) - xy = np.dot(uv, trans) - xy = xy[:, 0:-1] - return xy - - -def tforminv(trans, uv): - """ - Function: - ---------- - apply the inverse of affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of inverse-transformed coordinates (x, y) - """ - Tinv = inv(trans) - xy = tformfwd(Tinv, uv) - return xy - - -def findNonreflectiveSimilarity(uv, xy, options=None): - options = {'K': 2} - - K = options['K'] - M = xy.shape[0] - x = xy[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - y = xy[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - - tmp1 = np.hstack((x, y, np.ones((M, 1)), np.zeros((M, 1)))) - tmp2 = np.hstack((y, -x, np.zeros((M, 1)), np.ones((M, 1)))) - X = np.vstack((tmp1, tmp2)) - - u = uv[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - v = uv[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - U = np.vstack((u, v)) - - # We know that X * r = U - if rank(X) >= 2 * K: - r, _, _, _ = lstsq(X, U, rcond=-1) - r = np.squeeze(r) - else: - raise Exception('cp2tform:twoUniquePointsReq') - sc = r[0] - ss = r[1] - tx = r[2] - ty = r[3] - - Tinv = np.array([[sc, -ss, 0], [ss, sc, 0], [tx, ty, 1]]) - T = inv(Tinv) - T[:, 2] = np.array([0, 0, 1]) - - return T, Tinv - - -def findSimilarity(uv, xy, options=None): - options = {'K': 2} - - # uv = np.array(uv) - # xy = np.array(xy) - - # Solve for trans1 - trans1, trans1_inv = findNonreflectiveSimilarity(uv, xy, options) - - # Solve for trans2 - - # manually reflect the xy data across the Y-axis - xyR = xy - xyR[:, 0] = -1 * xyR[:, 0] - - trans2r, trans2r_inv = findNonreflectiveSimilarity(uv, xyR, options) - - # manually reflect the tform to undo the reflection done on xyR - TreflectY = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]]) - - trans2 = np.dot(trans2r, TreflectY) - - # Figure out if trans1 or trans2 is better - xy1 = tformfwd(trans1, uv) - norm1 = norm(xy1 - xy) - - xy2 = tformfwd(trans2, uv) - norm2 = norm(xy2 - xy) - - if norm1 <= norm2: - return trans1, trans1_inv - else: - trans2_inv = 
inv(trans2) - return trans2, trans2_inv - - -def get_similarity_transform(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'trans': - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y, 1] = [u, v, 1] * trans - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - @reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - trans_inv: 3x3 np.array - inverse of trans, transform matrix from xy to uv - """ - - if reflective: - trans, trans_inv = findSimilarity(src_pts, dst_pts) - else: - trans, trans_inv = findNonreflectiveSimilarity(src_pts, dst_pts) - - return trans, trans_inv - - -def cvt_tform_mat_for_cv2(trans): - """ - Function: - ---------- - Convert Transform Matrix 'trans' into 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - cv2_trans = trans[:, 0:2].T - - return cv2_trans - - -def get_similarity_transform_for_cv2(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - trans, trans_inv = get_similarity_transform(src_pts, dst_pts, reflective) - cv2_trans = cvt_tform_mat_for_cv2(trans) - - return cv2_trans - - -if __name__ == '__main__': - """ - u = [0, 6, -2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - # In Matlab, run: - # - # uv = [u'; v']; - # xy = [x'; y']; - # tform_sim=cp2tform(uv,xy,'similarity'); - # - # trans = tform_sim.tdata.T - # ans = - # -0.0764 -1.6190 0 - # 1.6190 -0.0764 0 - # -3.2156 0.0290 1.0000 - # trans_inv = tform_sim.tdata.Tinv - # ans = - # - # -0.0291 0.6163 0 - # -0.6163 -0.0291 0 - # -0.0756 1.9826 1.0000 - # xy_m=tformfwd(tform_sim, u,v) - # - # xy_m = - # - # -3.2156 0.0290 - # 1.1833 -9.9143 - # 5.0323 2.8853 - # uv_m=tforminv(tform_sim, x,y) - # - # uv_m = - # - # 0.5698 1.3953 - # 6.0872 2.2733 - # -2.6570 4.3314 - """ - u = [0, 6, -2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - uv = np.array((u, v)).T - xy = np.array((x, y)).T - - print('\n--->uv:') - print(uv) - print('\n--->xy:') - print(xy) - - trans, trans_inv = get_similarity_transform(uv, xy) - - print('\n--->trans matrix:') - print(trans) - - print('\n--->trans_inv matrix:') - print(trans_inv) - - print('\n---> apply 
transform to uv') - print('\nxy_m = uv_augmented * trans') - uv_aug = np.hstack((uv, np.ones((uv.shape[0], 1)))) - xy_m = np.dot(uv_aug, trans) - print(xy_m) - - print('\nxy_m = tformfwd(trans, uv)') - xy_m = tformfwd(trans, uv) - print(xy_m) - - print('\n---> apply inverse transform to xy') - print('\nuv_m = xy_augmented * trans_inv') - xy_aug = np.hstack((xy, np.ones((xy.shape[0], 1)))) - uv_m = np.dot(xy_aug, trans_inv) - print(uv_m) - - print('\nuv_m = tformfwd(trans_inv, xy)') - uv_m = tformfwd(trans_inv, xy) - print(uv_m) - - uv_m = tforminv(trans, xy) - print('\nuv_m = tforminv(trans, xy)') - print(uv_m) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/__init__.py deleted file mode 100644 index 66ef7c953cff4385424b208313445962d4facf28..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longformer/__init__.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - is_tf_available, - is_tokenizers_available, - is_torch_available, -) - - -_import_structure = { - "configuration_longformer": [ - "LONGFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", - "LongformerConfig", - "LongformerOnnxConfig", - ], - "tokenization_longformer": ["LongformerTokenizer"], -} - -try: - if not is_tokenizers_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["tokenization_longformer_fast"] = ["LongformerTokenizerFast"] - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_longformer"] = [ - "LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST", - "LongformerForMaskedLM", - "LongformerForMultipleChoice", - "LongformerForQuestionAnswering", - "LongformerForSequenceClassification", - "LongformerForTokenClassification", - "LongformerModel", - "LongformerPreTrainedModel", - "LongformerSelfAttention", - ] - -try: - if not is_tf_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_tf_longformer"] = [ - "TF_LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST", - "TFLongformerForMaskedLM", - "TFLongformerForMultipleChoice", - "TFLongformerForQuestionAnswering", - "TFLongformerForSequenceClassification", - "TFLongformerForTokenClassification", - "TFLongformerModel", - "TFLongformerPreTrainedModel", - "TFLongformerSelfAttention", - ] - - -if TYPE_CHECKING: - from .configuration_longformer import ( - LONGFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, - LongformerConfig, - LongformerOnnxConfig, - ) - from .tokenization_longformer import LongformerTokenizer - - try: - if not is_tokenizers_available(): 
- raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .tokenization_longformer_fast import LongformerTokenizerFast - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_longformer import ( - LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST, - LongformerForMaskedLM, - LongformerForMultipleChoice, - LongformerForQuestionAnswering, - LongformerForSequenceClassification, - LongformerForTokenClassification, - LongformerModel, - LongformerPreTrainedModel, - LongformerSelfAttention, - ) - - try: - if not is_tf_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_tf_longformer import ( - TF_LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST, - TFLongformerForMaskedLM, - TFLongformerForMultipleChoice, - TFLongformerForQuestionAnswering, - TFLongformerForSequenceClassification, - TFLongformerForTokenClassification, - TFLongformerModel, - TFLongformerPreTrainedModel, - TFLongformerSelfAttention, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/WhisperPPGLarge.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/WhisperPPGLarge.py deleted file mode 100644 index cab1ca646a1559c2a05b24ec38474408f27b3f08..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/WhisperPPGLarge.py +++ /dev/null @@ -1,30 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch - -from vencoder.whisper.model import Whisper, ModelDimensions -from vencoder.whisper.audio import pad_or_trim, log_mel_spectrogram - - -class WhisperPPGLarge(SpeechEncoder): - def __init__(self,vec_path = "pretrain/large-v2.pt",device=None): - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - checkpoint = torch.load(vec_path, map_location=device) - dims = ModelDimensions(**checkpoint["dims"]) - model = Whisper(dims) - model.load_state_dict(checkpoint["model_state_dict"]) - self.hidden_dim = dims - self.model = model.to(self.dev) - - def encoder(self, wav): - audio = wav - audln = audio.shape[0] - ppgln = audln // 320 - audio = pad_or_trim(audio) - mel = log_mel_spectrogram(audio).to(self.dev) - with torch.no_grad(): - ppg = self.model.encoder(mel.unsqueeze(0)).squeeze().data.cpu().float().numpy() - ppg = torch.FloatTensor(ppg[:ppgln,]).to(self.dev) - return ppg[None,:,:].transpose(1, 2) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/README.md b/spaces/yl12053/so-vits-4.1-Kitasan-Black/README.md deleted file mode 100644 index 1a9b54c85b15312356961269b64106c2a93841e4..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: So Vits 4.1 Kitasan Black -emoji: 🐨 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -python_version: 3.8.10 -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/logger/saver.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/logger/saver.py deleted file mode 100644 index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000 --- 
a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/logger/saver.py +++ /dev/null @@ -1,150 +0,0 @@ -''' -author: wayn391@mastertones -''' - -import os -import json -import time -import yaml -import datetime -import torch -import matplotlib.pyplot as plt -from . import utils -from torch.utils.tensorboard import SummaryWriter - -class Saver(object): - def __init__( - self, - args, - initial_global_step=-1): - - self.expdir = args.env.expdir - self.sample_rate = args.data.sampling_rate - - # cold start - self.global_step = initial_global_step - self.init_time = time.time() - self.last_time = time.time() - - # makedirs - os.makedirs(self.expdir, exist_ok=True) - - # path - self.path_log_info = os.path.join(self.expdir, 'log_info.txt') - - # ckpt - os.makedirs(self.expdir, exist_ok=True) - - # writer - self.writer = SummaryWriter(os.path.join(self.expdir, 'logs')) - - # save config - path_config = os.path.join(self.expdir, 'config.yaml') - with open(path_config, "w") as out_config: - yaml.dump(dict(args), out_config) - - - def log_info(self, msg): - '''log method''' - if isinstance(msg, dict): - msg_list = [] - for k, v in msg.items(): - tmp_str = '' - if isinstance(v, int): - tmp_str = '{}: {:,}'.format(k, v) - else: - tmp_str = '{}: {}'.format(k, v) - - msg_list.append(tmp_str) - msg_str = '\n'.join(msg_list) - else: - msg_str = msg - - # dsplay - print(msg_str) - - # save - with open(self.path_log_info, 'a') as fp: - fp.write(msg_str+'\n') - - def log_value(self, dict): - for k, v in dict.items(): - self.writer.add_scalar(k, v, self.global_step) - - def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5): - spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1) - spec = spec_cat[0] - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 9)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - plt.tight_layout() - self.writer.add_figure(name, fig, self.global_step) - - def log_audio(self, dict): - for k, v in dict.items(): - self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate) - - def get_interval_time(self, update=True): - cur_time = time.time() - time_interval = cur_time - self.last_time - if update: - self.last_time = cur_time - return time_interval - - def get_total_time(self, to_str=True): - total_time = time.time() - self.init_time - if to_str: - total_time = str(datetime.timedelta( - seconds=total_time))[:-5] - return total_time - - def save_model( - self, - model, - optimizer, - name='model', - postfix='', - to_json=False): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # check - print(' [*] model checkpoint saved: {}'.format(path_pt)) - - # save - if optimizer is not None: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict(), - 'optimizer': optimizer.state_dict()}, path_pt) - else: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict()}, path_pt) - - # to json - if to_json: - path_json = os.path.join( - self.expdir , name+'.json') - utils.to_json(path_params, path_json) - - def delete_model(self, name='model', postfix=''): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # delete - if os.path.exists(path_pt): - os.remove(path_pt) - print(' [*] model checkpoint deleted: {}'.format(path_pt)) - - def global_step_increment(self): - self.global_step += 1 - - diff --git 
a/spaces/ynhe/AskAnything/models/grit_model.py b/spaces/ynhe/AskAnything/models/grit_model.py deleted file mode 100644 index 43337692557b32c430074071b5cdec45499ea7cb..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_model.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import sys - -from models.grit_src.image_dense_captions import image_caption_api, init_demo, dense_pred_to_caption, dense_pred_to_caption_only_name -from detectron2.data.detection_utils import read_image - -class DenseCaptioning(): - def __init__(self, device): - self.device = device - self.demo = None - - - def initialize_model(self): - self.demo = init_demo(self.device) - - def image_dense_caption_debug(self, image_src): - dense_caption = """ - 1. the broccoli is green, [0, 0, 333, 325]; - 2. a piece of broccoli, [0, 147, 143, 324]; - 3. silver fork on plate, [4, 547, 252, 612]; - """ - return dense_caption - - def image_dense_caption(self, image_src): - dense_caption = image_caption_api(image_src, self.device) - print('\033[1;35m' + '*' * 100 + '\033[0m') - print("Step2, Dense Caption:\n") - print(dense_caption) - print('\033[1;35m' + '*' * 100 + '\033[0m') - return dense_caption - - def run_caption_api(self,image_src): - img = read_image(image_src, format="BGR") - print(img.shape) - predictions, visualized_output = self.demo.run_on_image(img) - new_caption = dense_pred_to_caption_only_name(predictions) - return new_caption - - def run_caption_tensor(self,img): - # img = read_image(image_src, format="BGR") - # print(img.shape) - predictions, visualized_output = self.demo.run_on_image(img) - new_caption = dense_pred_to_caption_only_name(predictions) - return new_caption - - diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py deleted file mode 100644 index da9b324f1582e31d1a16d2fe462ac2989bea56ea..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py +++ /dev/null @@ -1,406 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import os -import sys -import tempfile -from contextlib import ExitStack, contextmanager -from copy import deepcopy -from unittest import mock -import torch -from torch import nn - -# need some explicit imports due to https://github.com/pytorch/pytorch/issues/38964 -import detectron2 # noqa F401 -from detectron2.structures import Boxes, Instances -from detectron2.utils.env import _import_file - -_counter = 0 - - -def _clear_jit_cache(): - from torch.jit._recursive import concrete_type_store - from torch.jit._state import _jit_caching_layer - - concrete_type_store.type_store.clear() # for modules - _jit_caching_layer.clear() # for free functions - - -def _add_instances_conversion_methods(newInstances): - """ - Add from_instances methods to the scripted Instances class. 
- """ - cls_name = newInstances.__name__ - - @torch.jit.unused - def from_instances(instances: Instances): - """ - Create scripted Instances from original Instances - """ - fields = instances.get_fields() - image_size = instances.image_size - ret = newInstances(image_size) - for name, val in fields.items(): - assert hasattr(ret, f"_{name}"), f"No attribute named {name} in {cls_name}" - setattr(ret, name, deepcopy(val)) - return ret - - newInstances.from_instances = from_instances - - -@contextmanager -def patch_instances(fields): - """ - A contextmanager, under which the Instances class in detectron2 is replaced - by a statically-typed scriptable class, defined by `fields`. - See more in `scripting_with_instances`. - """ - - with tempfile.TemporaryDirectory(prefix="detectron2") as dir, tempfile.NamedTemporaryFile( - mode="w", encoding="utf-8", suffix=".py", dir=dir, delete=False - ) as f: - try: - # Objects that use Instances should not reuse previously-compiled - # results in cache, because `Instances` could be a new class each time. - _clear_jit_cache() - - cls_name, s = _gen_instance_module(fields) - f.write(s) - f.flush() - f.close() - - module = _import(f.name) - new_instances = getattr(module, cls_name) - _ = torch.jit.script(new_instances) - # let torchscript think Instances was scripted already - Instances.__torch_script_class__ = True - # let torchscript find new_instances when looking for the jit type of Instances - Instances._jit_override_qualname = torch._jit_internal._qualified_name(new_instances) - - _add_instances_conversion_methods(new_instances) - yield new_instances - finally: - try: - del Instances.__torch_script_class__ - del Instances._jit_override_qualname - except AttributeError: - pass - sys.modules.pop(module.__name__) - - -def _gen_instance_class(fields): - """ - Args: - fields (dict[name: type]) - """ - - class _FieldType: - def __init__(self, name, type_): - assert isinstance(name, str), f"Field name must be str, got {name}" - self.name = name - self.type_ = type_ - self.annotation = f"{type_.__module__}.{type_.__name__}" - - fields = [_FieldType(k, v) for k, v in fields.items()] - - def indent(level, s): - return " " * 4 * level + s - - lines = [] - - global _counter - _counter += 1 - - cls_name = "ScriptedInstances{}".format(_counter) - - field_names = tuple(x.name for x in fields) - extra_args = ", ".join([f"{f.name}: Optional[{f.annotation}] = None" for f in fields]) - lines.append( - f""" -class {cls_name}: - def __init__(self, image_size: Tuple[int, int], {extra_args}): - self.image_size = image_size - self._field_names = {field_names} -""" - ) - - for f in fields: - lines.append( - indent(2, f"self._{f.name} = torch.jit.annotate(Optional[{f.annotation}], {f.name})") - ) - - for f in fields: - lines.append( - f""" - @property - def {f.name}(self) -> {f.annotation}: - # has to use a local for type refinement - # https://pytorch.org/docs/stable/jit_language_reference.html#optional-type-refinement - t = self._{f.name} - assert t is not None, "{f.name} is None and cannot be accessed!" 
- return t - - @{f.name}.setter - def {f.name}(self, value: {f.annotation}) -> None: - self._{f.name} = value -""" - ) - - # support method `__len__` - lines.append( - """ - def __len__(self) -> int: -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - return len(t) -""" - ) - lines.append( - """ - raise NotImplementedError("Empty Instances does not support __len__!") -""" - ) - - # support method `has` - lines.append( - """ - def has(self, name: str) -> bool: -""" - ) - for f in fields: - lines.append( - f""" - if name == "{f.name}": - return self._{f.name} is not None -""" - ) - lines.append( - """ - return False -""" - ) - - # support method `to` - none_args = ", None" * len(fields) - lines.append( - f""" - def to(self, device: torch.device) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - if hasattr(f.type_, "to"): - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret._{f.name} = t.to(device) -""" - ) - else: - # For now, ignore fields that cannot be moved to devices. - # Maybe can support other tensor-like classes (e.g. __torch_function__) - pass - lines.append( - """ - return ret -""" - ) - - # support method `getitem` - none_args = ", None" * len(fields) - lines.append( - f""" - def __getitem__(self, item) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret._{f.name} = t[item] -""" - ) - lines.append( - """ - return ret -""" - ) - - # support method `cat` - # this version does not contain checks that all instances have same size and fields - none_args = ", None" * len(fields) - lines.append( - f""" - def cat(self, instances: List["{cls_name}"]) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - values: List[{f.annotation}] = [x.{f.name} for x in instances] - if torch.jit.isinstance(t, torch.Tensor): - ret._{f.name} = torch.cat(values, dim=0) - else: - ret._{f.name} = t.cat(values) -""" - ) - lines.append( - """ - return ret""" - ) - - # support method `get_fields()` - lines.append( - """ - def get_fields(self) -> Dict[str, Tensor]: - ret = {} - """ - ) - for f in fields: - if f.type_ == Boxes: - stmt = "t.tensor" - elif f.type_ == torch.Tensor: - stmt = "t" - else: - stmt = f'assert False, "unsupported type {str(f.type_)}"' - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret["{f.name}"] = {stmt} - """ - ) - lines.append( - """ - return ret""" - ) - return cls_name, os.linesep.join(lines) - - -def _gen_instance_module(fields): - # TODO: find a more automatic way to enable import of other classes - s = """ -from copy import deepcopy -import torch -from torch import Tensor -import typing -from typing import * - -import detectron2 -from detectron2.structures import Boxes, Instances - -""" - - cls_name, cls_def = _gen_instance_class(fields) - s += cls_def - return cls_name, s - - -def _import(path): - return _import_file( - "{}{}".format(sys.modules[__name__].__name__, _counter), path, make_importable=True - ) - - -@contextmanager -def patch_builtin_len(modules=()): - """ - Patch the builtin len() function of a few detectron2 modules - to use __len__ instead, because __len__ does not convert values to - integers and therefore is friendly to tracing. 
- - Args: - modules (list[stsr]): names of extra modules to patch len(), in - addition to those in detectron2. - """ - - def _new_len(obj): - return obj.__len__() - - with ExitStack() as stack: - MODULES = [ - "detectron2.modeling.roi_heads.fast_rcnn", - "detectron2.modeling.roi_heads.mask_head", - "detectron2.modeling.roi_heads.keypoint_head", - ] + list(modules) - ctxs = [stack.enter_context(mock.patch(mod + ".len")) for mod in MODULES] - for m in ctxs: - m.side_effect = _new_len - yield - - -def patch_nonscriptable_classes(): - """ - Apply patches on a few nonscriptable detectron2 classes. - Should not have side-effects on eager usage. - """ - # __prepare_scriptable__ can also be added to models for easier maintenance. - # But it complicates the clean model code. - - from detectron2.modeling.backbone import ResNet, FPN - - # Due to https://github.com/pytorch/pytorch/issues/36061, - # we change backbone to use ModuleList for scripting. - # (note: this changes param names in state_dict) - - def prepare_resnet(self): - ret = deepcopy(self) - ret.stages = nn.ModuleList(ret.stages) - for k in self.stage_names: - delattr(ret, k) - return ret - - ResNet.__prepare_scriptable__ = prepare_resnet - - def prepare_fpn(self): - ret = deepcopy(self) - ret.lateral_convs = nn.ModuleList(ret.lateral_convs) - ret.output_convs = nn.ModuleList(ret.output_convs) - for name, _ in self.named_children(): - if name.startswith("fpn_"): - delattr(ret, name) - return ret - - FPN.__prepare_scriptable__ = prepare_fpn - - # Annotate some attributes to be constants for the purpose of scripting, - # even though they are not constants in eager mode. - from detectron2.modeling.roi_heads import StandardROIHeads - - if hasattr(StandardROIHeads, "__annotations__"): - # copy first to avoid editing annotations of base class - StandardROIHeads.__annotations__ = deepcopy(StandardROIHeads.__annotations__) - StandardROIHeads.__annotations__["mask_on"] = torch.jit.Final[bool] - StandardROIHeads.__annotations__["keypoint_on"] = torch.jit.Final[bool] - - -# These patches are not supposed to have side-effects. -patch_nonscriptable_classes() - - -@contextmanager -def freeze_training_mode(model): - """ - A context manager that annotates the "training" attribute of every submodule - to constant, so that the training codepath in these modules can be - meta-compiled away. Upon exiting, the annotations are reverted. - """ - classes = {type(x) for x in model.modules()} - # __constants__ is the old way to annotate constants and not compatible - # with __annotations__ . - classes = {x for x in classes if not hasattr(x, "__constants__")} - for cls in classes: - cls.__annotations__["training"] = torch.jit.Final[bool] - yield - for cls in classes: - cls.__annotations__["training"] = bool diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/README.md b/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/README.md deleted file mode 100644 index 3bd6a0d65d33c1a2313c16e1bfd2e3fe9c7cd887..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/postcss-value-parser/README.md +++ /dev/null @@ -1,263 +0,0 @@ -# postcss-value-parser - -[![Travis CI](https://travis-ci.org/TrySound/postcss-value-parser.svg)](https://travis-ci.org/TrySound/postcss-value-parser) - -Transforms CSS declaration values and at-rule parameters into a tree of nodes, and provides a simple traversal API. 
- -## Usage - -```js -var valueParser = require('postcss-value-parser'); -var cssBackgroundValue = 'url(foo.png) no-repeat 40px 73%'; -var parsedValue = valueParser(cssBackgroundValue); -// parsedValue exposes an API described below, -// e.g. parsedValue.walk(..), parsedValue.toString(), etc. -``` - -For example, parsing the value `rgba(233, 45, 66, .5)` will return the following: - -```js -{ - nodes: [ - { - type: 'function', - value: 'rgba', - before: '', - after: '', - nodes: [ - { type: 'word', value: '233' }, - { type: 'div', value: ',', before: '', after: ' ' }, - { type: 'word', value: '45' }, - { type: 'div', value: ',', before: '', after: ' ' }, - { type: 'word', value: '66' }, - { type: 'div', value: ',', before: ' ', after: '' }, - { type: 'word', value: '.5' } - ] - } - ] -} -``` - -If you wanted to convert each `rgba()` value in `sourceCSS` to a hex value, you could do so like this: - -```js -var valueParser = require('postcss-value-parser'); - -var parsed = valueParser(sourceCSS); - -// walk() will visit all the of the nodes in the tree, -// invoking the callback for each. -parsed.walk(function (node) { - - // Since we only want to transform rgba() values, - // we can ignore anything else. - if (node.type !== 'function' && node.value !== 'rgba') return; - - // We can make an array of the rgba() arguments to feed to a - // convertToHex() function - var color = node.nodes.filter(function (node) { - return node.type === 'word'; - }).map(function (node) { - return Number(node.value); - }); // [233, 45, 66, .5] - - // Now we will transform the existing rgba() function node - // into a word node with the hex value - node.type = 'word'; - node.value = convertToHex(color); -}) - -parsed.toString(); // #E92D42 -``` - -## Nodes - -Each node is an object with these common properties: - -- **type**: The type of node (`word`, `string`, `div`, `space`, `comment`, or `function`). - Each type is documented below. -- **value**: Each node has a `value` property; but what exactly `value` means - is specific to the node type. Details are documented for each type below. -- **sourceIndex**: The starting index of the node within the original source - string. For example, given the source string `10px 20px`, the `word` node - whose value is `20px` will have a `sourceIndex` of `5`. - -### word - -The catch-all node type that includes keywords (e.g. `no-repeat`), -quantities (e.g. `20px`, `75%`, `1.5`), and hex colors (e.g. `#e6e6e6`). - -Node-specific properties: - -- **value**: The "word" itself. - -### string - -A quoted string value, e.g. `"something"` in `content: "something";`. - -Node-specific properties: - -- **value**: The text content of the string. -- **quote**: The quotation mark surrounding the string, either `"` or `'`. -- **unclosed**: `true` if the string was not closed properly. e.g. `"unclosed string `. - -### div - -A divider, for example - -- `,` in `animation-duration: 1s, 2s, 3s` -- `/` in `border-radius: 10px / 23px` -- `:` in `(min-width: 700px)` - -Node-specific properties: - -- **value**: The divider character. Either `,`, `/`, or `:` (see examples above). -- **before**: Whitespace before the divider. -- **after**: Whitespace after the divider. - -### space - -Whitespace used as a separator, e.g. ` ` occurring twice in `border: 1px solid black;`. - -Node-specific properties: - -- **value**: The whitespace itself. 
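To make the `word`, `div`, and `space` shapes above concrete, here is a small illustrative sketch (the exact whitespace attribution in `before`/`after` is an assumption based on the descriptions in this section, not verified output):

```js
var valueParser = require('postcss-value-parser');

// Parse a comma-separated value; per the node descriptions above, the comma
// becomes a `div` node that carries its surrounding whitespace.
var nodes = valueParser('1s, 2s').nodes;
// Roughly:
// [
//   { type: 'word', value: '1s' },
//   { type: 'div', value: ',', before: '', after: ' ' },
//   { type: 'word', value: '2s' }
// ]
```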
- -### comment - -A CSS comment starts with `/*` and ends with `*/` - -Node-specific properties: - -- **value**: The comment value without `/*` and `*/` -- **unclosed**: `true` if the comment was not closed properly. e.g. `/* comment without an end `. - -### function - -A CSS function, e.g. `rgb(0,0,0)` or `url(foo.bar)`. - -Function nodes have nodes nested within them: the function arguments. - -Additional properties: - -- **value**: The name of the function, e.g. `rgb` in `rgb(0,0,0)`. -- **before**: Whitespace after the opening parenthesis and before the first argument, - e.g. ` ` in `rgb( 0,0,0)`. -- **after**: Whitespace before the closing parenthesis and after the last argument, - e.g. ` ` in `rgb(0,0,0 )`. -- **nodes**: More nodes representing the arguments to the function. -- **unclosed**: `true` if the parentheses was not closed properly. e.g. `( unclosed-function `. - -Media features surrounded by parentheses are considered functions with an -empty value. For example, `(min-width: 700px)` parses to these nodes: - -```js -[ - { - type: 'function', value: '', before: '', after: '', - nodes: [ - { type: 'word', value: 'min-width' }, - { type: 'div', value: ':', before: '', after: ' ' }, - { type: 'word', value: '700px' } - ] - } -] -``` - -`url()` functions can be parsed a little bit differently depending on -whether the first character in the argument is a quotation mark. - -`url( /gfx/img/bg.jpg )` parses to: - -```js -{ type: 'function', sourceIndex: 0, value: 'url', before: ' ', after: ' ', nodes: [ - { type: 'word', sourceIndex: 5, value: '/gfx/img/bg.jpg' } -] } -``` - -`url( "/gfx/img/bg.jpg" )`, on the other hand, parses to: - -```js -{ type: 'function', sourceIndex: 0, value: 'url', before: ' ', after: ' ', nodes: [ - type: 'string', sourceIndex: 5, quote: '"', value: '/gfx/img/bg.jpg' }, -] } -``` - -### unicode-range - -The unicode-range CSS descriptor sets the specific range of characters to be -used from a font defined by @font-face and made available -for use on the current page (`unicode-range: U+0025-00FF`). - -Node-specific properties: - -- **value**: The "unicode-range" itself. - -## API - -``` -var valueParser = require('postcss-value-parser'); -``` - -### valueParser.unit(quantity) - -Parses `quantity`, distinguishing the number from the unit. Returns an object like the following: - -```js -// Given 2rem -{ - number: '2', - unit: 'rem' -} -``` - -If the `quantity` argument cannot be parsed as a number, returns `false`. - -*This function does not parse complete values*: you cannot pass it `1px solid black` and expect `px` as -the unit. Instead, you should pass it single quantities only. Parse `1px solid black`, then pass it -the stringified `1px` node (a `word` node) to parse the number and unit. - -### valueParser.stringify(nodes[, custom]) - -Stringifies a node or array of nodes. - -The `custom` function is called for each `node`; return a string to override the default behaviour. - -### valueParser.walk(nodes, callback[, bubble]) - -Walks each provided node, recursively walking all descendent nodes within functions. - -Returning `false` in the `callback` will prevent traversal of descendent nodes (within functions). -You can use this feature to for shallow iteration, walking over only the *immediate* children. -*Note: This only applies if `bubble` is `false` (which is the default).* - -By default, the tree is walked from the outermost node inwards. -To reverse the direction, pass `true` for the `bubble` argument. 
- -The `callback` is invoked with three arguments: `callback(node, index, nodes)`. - -- `node`: The current node. -- `index`: The index of the current node. -- `nodes`: The complete nodes array passed to `walk()`. - -Returns the `valueParser` instance. - -### var parsed = valueParser(value) - -Returns the parsed node tree. - -### parsed.nodes - -The array of nodes. - -### parsed.toString() - -Stringifies the node tree. - -### parsed.walk(callback[, bubble]) - -Walks each node inside `parsed.nodes`. See the documentation for `valueParser.walk()` above. - -# License - -MIT © [Bogdan Chadkin](mailto:trysound@yandex.ru) diff --git a/spaces/younker/chatgpt-turbo/client/src/pages/_document.tsx b/spaces/younker/chatgpt-turbo/client/src/pages/_document.tsx deleted file mode 100644 index b2fff8b4262dde3178afed021bb634b1379cd125..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/src/pages/_document.tsx +++ /dev/null @@ -1,13 +0,0 @@ -import { Html, Head, Main, NextScript } from "next/document"; - -export default function Document() { - return ( - - - -
          - - - - ); -} diff --git a/spaces/yuhangzang/ContextDet-Demo/app.py b/spaces/yuhangzang/ContextDet-Demo/app.py deleted file mode 100644 index a2994c456dad3ba653b2868217436cc183697f74..0000000000000000000000000000000000000000 --- a/spaces/yuhangzang/ContextDet-Demo/app.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -os.system("python setup.py build develop --user") - -import gradio as gr - -from app_util import ContextDetDemo - -header = ''' -
-Contextual Object Detection with Multimodal Large Language Models
          -''' - -abstract = ''' -🤗 This is the official Gradio demo for Contextual Object Detection with Multimodal Large Language Models. - -🆒 Our goal is to promote object detection with better `context understanding` and enable `interactive feedback` -through `human language vocabulary`, all made possible by using multimodal large language models! - -🤝 This demo is still under construction. Your comments or suggestions are welcome! - -⚡ For faster inference without waiting in the queue, you may duplicate the space and use the GPU setting: - -Duplicate Space -

          -''' - -footer = r''' -🦁 **Github Repo** -We would be grateful if you consider star our github repo - -📝 **Citation** -We would be grateful if you consider citing our work if you find it useful: -```bibtex -@article{zang2023contextual, - author = {Zang, Yuhang and Li, Wei and Han, Jun, and Zhou, Kaiyang and Loy, Chen Change}, - title = {Contextual Object Detection with Multimodal Large Language Models}, - journal = {arXiv preprint arXiv:2305.18279}, - year = {2023} -} -``` - -📋 **License** -This project is licensed under -S-Lab License 1.0. -Redistribution and use for non-commercial purposes should follow this license. - -📧 **Contact** -If you have any questions, please feel free to contact Yuhang Zang (zang0012@ntu.edu.sg). -''' - -css = ''' -h1#title { - text-align: center; -} -''' - -cloze_samples = [ - ["main_4.jpg", "A teacher is helping a with her homework at desk."], - ["main_5.jpg", "A man crossing a busy with his up."], -] - - -captioning_samples = [ - ["main_1.jpg"], - ["main_2.jpg"], - ["main_4.jpg"], - ["main_6.jpeg"], -] - -qa_samples = [ - ["main_5.jpg", "What is his career?"], - ["main_6.jpeg", "What are they doing?"], -] - -contextdet_model = ContextDetDemo('./ckpt.pth') - - -def inference_fn_select(image_input, text_input, task_button, history=[]): - return contextdet_model.forward(image_input, text_input, task_button, history) - - -def set_cloze_samples(example: list) -> dict: - return gr.Image.update(example[0]), gr.Textbox.update(example[1]), 'Cloze Test' - - -def set_captioning_samples(example: list) -> dict: - return gr.Image.update(example[0]), gr.Textbox.update(''), 'Captioning' - - -def set_qa_samples(example: list) -> dict: - return gr.Image.update(example[0]), gr.Textbox.update(example[1]), 'Question Answering' - - -with gr.Blocks(css=css, theme=gr.themes.Soft()) as demo: - gr.Markdown(header) - gr.Markdown(abstract) - state = gr.State([]) - - with gr.Row(): - with gr.Column(scale=0.5, min_width=500): - image_input = gr.Image(type="pil", interactive=True, label="Upload an image 📁").style(height=250) - with gr.Column(scale=0.5, min_width=500): - chat_input = gr.Textbox(label="Type your text prompt ⤵️") - task_button = gr.Radio(label="Contextual Task type", interactive=True, - choices=['Cloze Test', 'Captioning', 'Question Answering'], - value='Cloze Test') - with gr.Row(): - submit_button = gr.Button(value="🏃 Run", interactive=True, variant="primary") - clear_button = gr.Button(value="🔄 Clear", interactive=True) - - with gr.Row(): - with gr.Column(scale=0.5, min_width=500): - image_output = gr.Image(type='pil', interactive=False, label="Detection output") - with gr.Column(scale=0.5, min_width=500): - chat_output = gr.Chatbot(label="Text output").style(height=300) - - with gr.Row(): - with gr.Column(scale=0.33, min_width=330): - cloze_examples = gr.Dataset( - label='Contextual Cloze Test Examples', - components=[image_input, chat_input], - samples=cloze_samples, - ) - with gr.Column(scale=0.33, min_width=330): - qa_examples = gr.Dataset( - label='Contextual Question Answering Examples', - components=[image_input, chat_input], - samples=qa_samples, - ) - with gr.Column(scale=0.33, min_width=330): - captioning_examples = gr.Dataset( - label='Contextual Captioning Examples', - components=[image_input, ], - samples=captioning_samples, - ) - - submit_button.click( - inference_fn_select, - [image_input, chat_input, task_button, state], - [image_output, chat_output, state], - ) - clear_button.click( - lambda: (None, None, "", [], [], 'Question Answering'), - [], 
- [image_input, image_output, chat_input, chat_output, state, task_button], - queue=False, - ) - image_input.change( - lambda: (None, "", []), - [], - [image_output, chat_output, state], - queue=False, - ) - cloze_examples.click( - fn=set_cloze_samples, - inputs=[cloze_examples], - outputs=[image_input, chat_input, task_button], - ) - captioning_examples.click( - fn=set_captioning_samples, - inputs=[captioning_examples], - outputs=[image_input, chat_input, task_button], - ) - qa_examples.click( - fn=set_qa_samples, - inputs=[qa_examples], - outputs=[image_input, chat_input, task_button], - ) - - gr.Markdown(footer) - -demo.launch(enable_queue=True, share=False) -# demo.launch(enable_queue=True, share=True) diff --git a/spaces/zama-fhe/encrypted_health_prediction/README.md b/spaces/zama-fhe/encrypted_health_prediction/README.md deleted file mode 100644 index 5c003b8646a4fa4acbf022e252f2ecde8c9e2730..0000000000000000000000000000000000000000 --- a/spaces/zama-fhe/encrypted_health_prediction/README.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: Health Prediction On Encrypted Data Using Fully Homomorphic Encryption -emoji: 🩺😷 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: true -tags: - - FHE - - PPML - - privacy - - privacy preserving machine learning - - image processing - - homomorphic encryption - - security -python_version: 3.10.6 ---- - -# Healthcare prediction using FHE - -## Running the application on your machine - -From this directory, i.e., `health_prediction`, you can proceed with the following steps. - -### Do once - -First, create a virtual env and activate it: - - - -```bash -python3 -m venv .venv -source .venv/bin/activate -``` - -Then, install required packages: - - - -```bash -pip3 install pip --upgrade -pip3 install -U pip wheel setuptools --ignore-installed -pip3 install -r requirements.txt --ignore-installed -``` - -## Run the following steps each time you relaunch the application - -In a terminal, run: - - - -```bash -source .venv/bin/activate -python3 app.py -``` - -## Interacting with the application - -Open the given URL link (search for a line like `Running on local URL: http://127.0.0.1:8888/`). 
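As an optional sanity check before launching (not part of the original instructions; probing Gradio here is an assumption based on the Space using the Gradio SDK), you can confirm the environment is set up:

```bash
source .venv/bin/activate
# Gradio should be importable if the requirements installed correctly
python3 -c "import gradio; print(gradio.__version__)"
```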
diff --git a/spaces/zhang-wei-jian/docker/node_modules/deep-equal/example/cmp.js b/spaces/zhang-wei-jian/docker/node_modules/deep-equal/example/cmp.js deleted file mode 100644 index 67014b88dcbc9b2ebb6398bf79e8f15c628fcf51..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/deep-equal/example/cmp.js +++ /dev/null @@ -1,11 +0,0 @@ -var equal = require('../'); -console.dir([ - equal( - { a : [ 2, 3 ], b : [ 4 ] }, - { a : [ 2, 3 ], b : [ 4 ] } - ), - equal( - { x : 5, y : [6] }, - { x : 5, y : 6 } - ) -]); diff --git a/spaces/zhenwusw/JoJoGAN/e4e/criteria/lpips/networks.py b/spaces/zhenwusw/JoJoGAN/e4e/criteria/lpips/networks.py deleted file mode 100644 index 3a0d13ad2d560278f16586da68d3a5eadb26e746..0000000000000000000000000000000000000000 --- a/spaces/zhenwusw/JoJoGAN/e4e/criteria/lpips/networks.py +++ /dev/null @@ -1,96 +0,0 @@ -from typing import Sequence - -from itertools import chain - -import torch -import torch.nn as nn -from torchvision import models - -from criteria.lpips.utils import normalize_activation - - -def get_network(net_type: str): - if net_type == 'alex': - return AlexNet() - elif net_type == 'squeeze': - return SqueezeNet() - elif net_type == 'vgg': - return VGG16() - else: - raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') - - -class LinLayers(nn.ModuleList): - def __init__(self, n_channels_list: Sequence[int]): - super(LinLayers, self).__init__([ - nn.Sequential( - nn.Identity(), - nn.Conv2d(nc, 1, 1, 1, 0, bias=False) - ) for nc in n_channels_list - ]) - - for param in self.parameters(): - param.requires_grad = False - - -class BaseNet(nn.Module): - def __init__(self): - super(BaseNet, self).__init__() - - # register buffer - self.register_buffer( - 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer( - 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def set_requires_grad(self, state: bool): - for param in chain(self.parameters(), self.buffers()): - param.requires_grad = state - - def z_score(self, x: torch.Tensor): - return (x - self.mean) / self.std - - def forward(self, x: torch.Tensor): - x = self.z_score(x) - - output = [] - for i, (_, layer) in enumerate(self.layers._modules.items(), 1): - x = layer(x) - if i in self.target_layers: - output.append(normalize_activation(x)) - if len(output) == len(self.target_layers): - break - return output - - -class SqueezeNet(BaseNet): - def __init__(self): - super(SqueezeNet, self).__init__() - - self.layers = models.squeezenet1_1(True).features - self.target_layers = [2, 5, 8, 10, 11, 12, 13] - self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] - - self.set_requires_grad(False) - - -class AlexNet(BaseNet): - def __init__(self): - super(AlexNet, self).__init__() - - self.layers = models.alexnet(True).features - self.target_layers = [2, 5, 8, 10, 12] - self.n_channels_list = [64, 192, 384, 256, 256] - - self.set_requires_grad(False) - - -class VGG16(BaseNet): - def __init__(self): - super(VGG16, self).__init__() - - self.layers = models.vgg16(True).features - self.target_layers = [4, 9, 16, 23, 30] - self.n_channels_list = [64, 128, 256, 512, 512] - - self.set_requires_grad(False) \ No newline at end of file diff --git a/spaces/ziguo/Real-ESRGAN/realesrgan/models/__init__.py b/spaces/ziguo/Real-ESRGAN/realesrgan/models/__init__.py deleted file mode 100644 index 0be7105dc75d150c49976396724085f678dc0675..0000000000000000000000000000000000000000 --- a/spaces/ziguo/Real-ESRGAN/realesrgan/models/__init__.py 
+++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import model modules for registry -# scan all the files that end with '_model.py' under the model folder -model_folder = osp.dirname(osp.abspath(__file__)) -model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')] -# import all the model modules -_model_modules = [importlib.import_module(f'realesrgan.models.{file_name}') for file_name in model_filenames]
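This auto-import only has an effect if each `*_model.py` module registers its model class at import time. As an illustrative sketch (the file name and class are hypothetical, not taken from this repository; it assumes BasicSR's `MODEL_REGISTRY` and `SRModel`), a module discovered by the scan above might look like:

```python
# realesrgan/models/example_model.py  (hypothetical file name)
from basicsr.models.sr_model import SRModel
from basicsr.utils.registry import MODEL_REGISTRY


# The decorator adds the class to BasicSR's model registry when this module
# is imported, which is exactly what the package __init__ above triggers.
@MODEL_REGISTRY.register()
class ExampleModel(SRModel):
    """Minimal placeholder; real models override training and optimization steps."""
    pass
```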